Summary

At the start of this chapter, we described how to quickly check the status of running containers with built-in tools such as kubectl. We then expanded the discussion to the concepts and principles of monitoring, including why, what, and how to monitor our application on Kubernetes. Afterward, we built a monitoring system with Prometheus at its core, and set up exporters to collect metrics from our application, system components, and Kubernetes components. We also introduced the fundamentals of Prometheus, such as its architecture and its query language, PromQL, so that we can now use metrics to gain insights into our cluster and the applications running inside it, not only to troubleshoot retrospectively, but also to detect potential failures early. After that, we described common logging patterns, how to handle them in Kubernetes, and deployed an EFK stack to aggregate logs. Finally, we turned to another important piece of infrastructure between Kubernetes and our applications, the service mesh, which gives us finer-grained telemetry for monitoring. The system we built in this chapter enhances the reliability of our service.

In Chapter 8, Resource Management and Scaling, we'll leverage those metrics to optimize the resources used by our services.
