Collecting logs with a logging agent per node

We know that the messages we retrieve via kubectl logs are streams redirected from the stdout/stderr of a container, but kubectl logs is obviously not a practical way to collect logs across a cluster. In fact, kubectl logs gets the logs from the kubelet, and the kubelet aggregates the logs written by the container runtime under the host path /var/log/containers/. The naming pattern of the log files is {pod_name}_{namespace}_{container_name}_{container_id}.log.

Therefore, to converge the standard streams of all running containers, we set up a logging agent on every node and configure it to tail and forward the log files under that path, as shown in the following diagram.
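Such an agent is typically deployed as a DaemonSet so that exactly one instance runs on each node and mounts the host's log directories. The following manifest is a minimal sketch of this pattern; Fluent Bit is used purely as an example, and the image tag and the /var/lib/docker/containers mount (which assumes the Docker runtime) are illustrative assumptions rather than values from this text:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: logging-agent
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: logging-agent
  template:
    metadata:
      labels:
        app: logging-agent
    spec:
      containers:
      - name: fluent-bit
        image: fluent/fluent-bit:1.9      # example image; use whichever agent you prefer
        volumeMounts:
        - name: varlog                    # /var/log/containers/*.log lives here
          mountPath: /var/log
          readOnly: true
        - name: runtimelogs               # the symlinks point into the runtime's directory
          mountPath: /var/lib/docker/containers
          readOnly: true
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: runtimelogs
        hostPath:
          path: /var/lib/docker/containers

Because the files under /var/log/containers/ are symlinks, the directory the container runtime actually writes to has to be mounted as well; otherwise the agent would follow links that resolve to nothing inside its own filesystem.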

In practice, we'd also configure the logging agent to tail the logs of the system and the Kubernetes components under /var/log on masters and nodes, such as the following:

  • kube-proxy.log
  • kube-apiserver.log
  • kube-scheduler.log
  • kube-controller-manager.log
  • etcd.log
If the Kubernetes components are managed by systemd, their logs will be present in journald instead.
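To give a concrete idea of what that configuration can look like, the snippet below combines a file tail for components that log to /var/log with a journald reader for systemd-managed ones. It is a sketch in Fluent Bit's configuration syntax; the ConfigMap name, the tags, and the kubelet.service filter are assumptions chosen for illustration:

apiVersion: v1
kind: ConfigMap
metadata:
  name: logging-agent-config
  namespace: kube-system
data:
  fluent-bit.conf: |
    # Tail the component logs that exist as plain files under /var/log
    [INPUT]
        Name  tail
        Path  /var/log/kube-*.log,/var/log/etcd.log
        Tag   kube.components

    # Read systemd-managed components (for example, the kubelet) from journald
    [INPUT]
        Name            systemd
        Tag             kube.systemd
        Systemd_Filter  _SYSTEMD_UNIT=kubelet.service

    # Print to stdout here for brevity; a real setup would forward to a log backend
    [OUTPUT]
        Name   stdout
        Match  *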

Aside from stdout/stderr, if an application stores its logs as files inside the container and persists them via a hostPath volume, a node logging agent can still pick them up and forward them. However, for each exported log file, we have to add a corresponding configuration to the logging agent so that it can be dispatched correctly. Moreover, we also need to name the log files properly to prevent collisions and handle log rotation ourselves, which makes this an unscalable and unmanageable mechanism.
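As an example of why this becomes unwieldy, consider a hypothetical pod that writes its own log file to a hostPath volume; every such application needs its own uniquely named host directory, and the agent needs a matching tail rule for that exact path:

apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  containers:
  - name: myapp
    image: myapp:latest                 # hypothetical application image
    volumeMounts:
    - name: applog
      mountPath: /var/log/myapp         # the app writes its log files in here
  volumes:
  - name: applog
    hostPath:
      path: /var/log/apps/myapp         # must be unique per application to avoid collisions

The agent then needs an extra tail input with a path such as /var/log/apps/myapp/*.log, plus its own rotation policy, repeated for every application that logs to files, which is exactly what makes this approach hard to scale.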
