Working with Kubernetes logs

Kubernetes comes with three daemon processes on the master: the API server, the scheduler, and the controller manager. Under the /var/log folder, there are three corresponding log files recording the logs of these processes:

Daemon on master     Log file                 Description
API server           apiserver.log            Logs for API calls
Scheduler            k8s-scheduler.log        Logs of scheduler data for any container scheduling events
Controller manager   controller-manager.log   Logs showing any events or issues related to the controller manager

On the nodes, we have a kubelet process to handle container operations and report to the master:

Daemon on node   Log file      Description
kubelet          kubelet.log   Logs for any issues happening in the containers

On both masters and nodes, there is another log file named kube-proxy.log to record any network connection issues.
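
Before collecting anything, it is worth confirming that these files actually exist on your hosts. The following is a quick check, assuming the default /var/log locations listed above (the exact file names can vary between deployments):

// confirm the log files exist on a master
# ls -l /var/log/apiserver.log /var/log/k8s-scheduler.log /var/log/controller-manager.log /var/log/kube-proxy.log
// confirm the log files exist on a node
# ls -l /var/log/kubelet.log /var/log/kube-proxy.log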

Getting ready

We will use ELK, the log collection platform introduced in the previous section, as the centralized platform for collecting Kubernetes logs. For the ELK setup, we suggest you review the Collecting application logs section again. Before we start collecting the Kubernetes logs, it is important to know their data structure. The preceding log files share the following format:

<log level><date> <timestamp> <indicator> <source file>:<line number>] <logs>

The following is an example:

E0328 00:46:50.870875    3189 reflector.go:227] pkg/proxy/config/api.go:60: Failed to watch *api.Endpoints: too old resource version: 45128 (45135)

The leading character of each line in the log file indicates the severity of that log entry:

  • D: DEBUG
  • I: INFO
  • W: WARN
  • E: ERROR
  • F: FATAL
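
This prefix makes it easy to filter by severity even before any log pipeline is in place. For example, the following is a quick spot check for error and fatal lines in the scheduler log, assuming the log path listed earlier:

// show error-level and fatal lines in the scheduler log
# grep "^[EF]" /var/log/k8s-scheduler.log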

How to do it…

We will still use the grok filter in the Logstash configuration, as discussed in the previous section, but we need to write a custom pattern for the <log level><date> prefix at the beginning of each log line. We will create a pattern file under the current directory:

// list custom patterns
# cat ./patterns/k8s
LOGLEVEL    [DEFIW]
DATE        [0-9]{4}
K8SLOGLEVEL %{LOGLEVEL:level}%{DATE}
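
If you want to verify the custom pattern before building the full pipeline, you can run it against a line pasted on standard input. The following is a minimal sketch, assuming Logstash is started from the directory containing the ./patterns folder (the file name test-pattern.conf is just an example):

// test the custom pattern against a pasted log line
# cat test-pattern.conf
input { stdin { } }

filter {
  grok {
    patterns_dir => ["./patterns"]
    match => { "message" => "%{K8SLOGLEVEL} %{GREEDYDATA:rest}" }
  }
}

output { stdout { codec => rubydebug } }

Start it with bin/logstash -f test-pattern.conf, paste the example line shown earlier, and the rubydebug output should contain level => "E".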

The K8SLOGLEVEL pattern splits the E0328 prefix into level=E and DATE=0328. The following is an example of how to send k8s-apiserver.log into the Elasticsearch cluster:

// list config file for k8s-apiserver.log in logstash
# cat apiserver.conf
input {
  file {
    path => "/var/log/k8s-apiserver.log"
  }
}

filter {
  grok {
    patterns_dir => ["./patterns"]
    match => { "message" => "%{K8SLOGLEVEL} %{TIME}    %{NUMBER} %{PROG:program}:%{POSINT:line}] %{GREEDYDATA:message}" }
  }
}

output {
  elasticsearch {
    hosts => ["_ES_IP_:_ES_PORT_"]
    index => "k8s-apiserver"
  }

  stdout { codec => rubydebug }
}

For the input, we use the file plugin (https://www.elastic.co/guide/en/logstash/current/plugins-inputs-file.html), specifying the path of k8s-apiserver.log. In grok, we use patterns_dir to point at the directory that defines our custom K8SLOGLEVEL pattern. The hosts setting in the elasticsearch output section should be set to your Elasticsearch IP and port number. The following is a sample output:

// start logstash with config apiserver.conf
# bin/logstash -f apiserver.conf
Settings: Default pipeline workers: 1
Pipeline main started
{
       "message" => [
        [0] "E0403 15:55:24.706498    2979 errors.go:62] apiserver received an error that is not an unversioned.Status: too old resource version: 47419 (47437)",
        [1] "apiserver received an error that is not an unversioned.Status: too old resource version: 47419 (47437)"
    ],
 "@timestamp" => 2016-04-03T15:55:25.709Z,
         "level" => "E",
          "host" => "kube-master1",
       "program" => "errors.go",
          "path" => "/var/log/k8s-apiserver.log",
          "line" => "62",
      "@version" => "1"
}
{
       "message" => [
        [0] "E0403 15:55:24.706784    2979 errors.go:62] apiserver received an error that is not an unversioned.Status: too old resource version: 47419 (47437)",
        [1] "apiserver received an error that is not an unversioned.Status: too old resource version: 47419 (47437)"
    ],
    "@timestamp" => 2016-04-03T15:55:25.711Z,
         "level" => "E",
          "host" => "kube-master1",
       "program" => "errors.go",
          "path" => "/var/log/k8s-apiserver.log",
          "line" => "62",
      "@version" => "1"
}

The output shows the current host, the log path, the log level, the program that emitted the line, and the full message. The other logs all share the same format, so it is easy to replicate these settings: just change the input path and specify a different index than k8s-apiserver for each of them, as shown in the sketch below. You are then free to search the logs via Kibana, or integrate other tools with Elasticsearch to receive notifications, and so on.
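
For example, the scheduler log can be shipped with an almost identical configuration. The following is a sketch in which only the input path and the index name differ from apiserver.conf (the file name scheduler.conf is just an example, and the log path is taken from the table at the beginning of this section):

// list config file for k8s-scheduler.log in logstash
# cat scheduler.conf
input {
  file {
    path => "/var/log/k8s-scheduler.log"
  }
}

filter {
  grok {
    patterns_dir => ["./patterns"]
    match => { "message" => "%{K8SLOGLEVEL} %{TIME}    %{NUMBER} %{PROG:program}:%{POSINT:line}] %{GREEDYDATA:message}" }
  }
}

output {
  elasticsearch {
    hosts => ["_ES_IP_:_ES_PORT_"]
    index => "k8s-scheduler"
  }

  stdout { codec => rubydebug }
}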

See also

Check out the following recipes:

  • The Configuring master and Configuring nodes recipes in Chapter 1, Building Your Own Kubernetes
  • Collecting application logs
  • Monitoring master and node