Rules

Rules are the binding between instances and a handler. Given the accesslog logentry instance and the fluentd handler from the previous examples, a rule such as this one associates the two entities:

apiVersion: config.istio.io/v1alpha2
kind: rule
metadata:
  name: accesslogtofluentd
  namespace: istio-system
spec:
  match: "true"
  actions:
  - handler: fluentd
    instances:
    - accesslog.logentry

Once the rule is applied, Mixer knows it should send the access logs, in the format defined previously, to the fluentd daemon at fluentd-aggregator-svc.logging:24224.
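
For reference, the fluentd handler that the rule points at is a Mixer adapter configuration holding the aggregator's address. A minimal sketch, assuming the address used above, might look like this (the earlier example in the book may differ in detail):

apiVersion: config.istio.io/v1alpha2
kind: fluentd
metadata:
  name: fluentd
  namespace: istio-system
spec:
  # Address of the fluentd daemon accepting the forward protocol;
  # the service name here is assumed from the text above.
  address: fluentd-aggregator-svc.logging:24224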

The example of deploying a fluentd instance that takes input over a TCP socket can be found under 7_3efk/logging-agent/fluentd-aggregator (https://github.com/PacktPublishing/DevOps-with-Kubernetes-Second-Edition/tree/master/chapter7/7-3_efk/logging-agent/fluentd-aggregator), and it is configured to forward logs to the Elasticsearch instance we deployed previously. The three Istio configurations for access logs (the handler, the logentry instance, and the rule) can be found in 7-4_istio_fluentd_accesslog.yml (https://github.com/PacktPublishing/DevOps-with-Kubernetes-Second-Edition/blob/master/chapter7/7-4_istio_fluentd_accesslog.yml).

Let's now think about metrics. If Istio is deployed with the official chart and Prometheus is enabled (it is by default), there will be a Prometheus instance in your cluster under the istio-system namespace, preconfigured to gather metrics from the Istio components. However, for various reasons, we may want to use our own Prometheus deployment, or dedicate the bundled one to metrics from Istio components only. Since the Prometheus architecture is flexible, as long as the target components expose their metrics endpoints, we can configure our own Prometheus instance to scrape those endpoints.

Some useful endpoints from Istio components are listed here:

  • <all-components>:9093/metrics: Every Istio component exposes its internal state on port 9093.
  • <envoy-sidecar>:15090/stats/prometheus: Every Envoy sidecar prints its raw stats here. If we want to monitor our application, it is advisable to use the Mixer templates to sort the metrics out first.
  • <istio-telemetry-pods>:42422/metrics: The metrics configured by the Prometheus adapter and processed by Mixer are available here. Note that the metrics from an Envoy sidecar are only available in the telemetry pod that the Envoy reports to. In other words, we should use the pod-level endpoint discovery of Prometheus to collect metrics from all telemetry pods, rather than scraping through the telemetry service; a sample scrape job is sketched after this list.
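
To illustrate the last point, here is a minimal sketch of a scrape job that discovers the telemetry pods directly with the Kubernetes pod role. The pod label used to select the targets is an assumption based on the official chart and may need adjusting for your release:

- job_name: istio-telemetry-pods
  kubernetes_sd_configs:
  - role: pod
    namespaces:
      names:
      - istio-system
  relabel_configs:
  # Keep only the telemetry pods; the app=telemetry label is an assumption
  # taken from the official chart and may differ between versions.
  - source_labels: [__meta_kubernetes_pod_label_app]
    action: keep
    regex: telemetry
  # Rewrite the target address to the Mixer metrics port, 42422.
  - source_labels: [__address__]
    action: replace
    regex: ([^:]+)(?::\d+)?
    replacement: $1:42422
    target_label: __address__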

By default, the following metrics are configured and made available to Prometheus:

  • requests_total
  • request_duration_seconds
  • request_bytes
  • response_bytes
  • tcp_sent_bytes_total
  • tcp_received_bytes_total
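
In Prometheus, these names carry an istio_ prefix added by the Prometheus adapter. As a quick usage sketch, a recording rule in our own Prometheus could aggregate the per-service request rate; the istio_requests_total metric name and the destination_service label here are assumptions based on the default adapter configuration:

groups:
- name: istio-mesh.rules
  rules:
  # Per-service request rate over the last five minutes; metric and label
  # names are assumed from the default Prometheus adapter configuration.
  - record: service:istio_requests:rate5m
    expr: sum(rate(istio_requests_total[5m])) by (destination_service)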

Another way to make the metrics collected by the Prometheus instance deployed with the official Istio release available to our own Prometheus is the federation setup: one Prometheus instance is configured to scrape the metrics stored in another Prometheus instance. This way, we can regard the Prometheus for Istio as the collector for all Istio-related metrics. The federation feature is exposed at the /federate path. Say we want to get all the metrics with the label {job="istio-mesh"}; the query would be as follows:

http://<prometheus-for-istio>/federate?match[]={job="istio-mesh"}
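
In scrape-configuration terms, this translates into a job such as the following in our own Prometheus, following the pattern from the official federation documentation. The target address is an assumption; substitute the actual service of the Prometheus bundled with Istio:

- job_name: istio-federation
  honor_labels: true
  metrics_path: /federate
  params:
    'match[]':
    - '{job="istio-mesh"}'
  static_configs:
  - targets:
    # Assumed in-cluster address of the Prometheus deployed with Istio.
    - prometheus.istio-system.svc:9090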

As a result, by adding a few configuration lines, we can easily integrate Istio metrics into the existing monitoring pipeline. For a full reference on federation, take a look at the official documentation: https://prometheus.io/docs/prometheus/latest/federation/.
