The emergence of Kubernetes 

A Kubernetes cluster is provisioned with container images for the various applications or microservices, and then put into operation. The next important step is to put a proper monitoring and alerting system in place, in order to gain deeper insight into the limitations and issues of any of its constituents, such as worker nodes, pods, and services. Let's start with the basics of Kubernetes.

Kubernetes is the popular platform that acts as the brain of a distributed container deployment. It is designed to compose multi-container applications and manage microservices-centric applications, with containers typically distributed across clusters of container hosts. Kubernetes provides practical mechanisms for application deployment, service discovery, scheduling, and scaling, and there are automated tools for monitoring Kubernetes environments. The relevance of a container orchestration and management platform grows with the rapid proliferation of multi-container applications, which are typically composite, business-aware, and process-centric.

As a best practice, each container hosts a single microservice, and there can be multiple instances of any microservice. That is, microservices and their instances are hosted in separate containers to guarantee service availability. Other requirements for hosting and running multi-container applications include managing application performance, enhancing service visibility, notification, and troubleshooting. Further noteworthy aspects include dynamic and appropriate infrastructure provisioning and the automated configuration of applications using configuration management tools. Service composition through container orchestration is the most critical aspect of the Kubernetes platform, apart from managing containers and clusters. As clouds themselves are being containerized, the role of Kubernetes in next-generation cloud environments is set to grow considerably.

A Kubernetes cluster is typically made up of a set of worker nodes under the supervision of a master node. The master's tasks include orchestrating the containers that are spread across the nodes and keeping track of their state. The cluster is managed and exposed through a REST API and a UI; the API server is the central point of control through which all cluster operations flow. The important ingredients of a Kubernetes deployment are shown in the following diagram:

A pod comprises one or more containers, and all containers have to run inside pods. The containers in a pod are always co-located and co-scheduled, and they run in a shared context with shared storage (https://kubernetes.io/):

  • Pods typically sit behind services. Services take care of load balancing traffic and expose the set of pods as a single, discoverable IP address/port.
  • The pods behind a service can be scaled horizontally through ReplicaSets, which create or destroy pods for each service as needed.
  • A ReplicaSet is the next-generation replication controller; it ensures that a specified number of pod replicas is running at all times.
  • A Deployment, a higher-level concept, manages ReplicaSets and provides declarative updates to pods.
  • Namespaces are virtual clusters that can contain one or more services.
  • Metadata allows the use of labels and annotations to mark up objects based on their deployment characteristics; the sketch after this list shows how these objects can be listed and selected through the API.
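
To make these building blocks concrete, here is a minimal sketch of querying them through the Kubernetes API, using the official Python client (the kubernetes package). It assumes a reachable cluster with a local kubeconfig, and the app=orders label selector is purely illustrative:

    # A minimal sketch: querying the objects described above through the
    # Kubernetes API server, using the official Python client.
    # Assumes a kubeconfig is available (for example, ~/.kube/config).
    from kubernetes import client, config

    config.load_kube_config()   # use config.load_incluster_config() inside a pod
    core = client.CoreV1Api()
    apps = client.AppsV1Api()

    # Namespaces are virtual clusters; walk each one and inspect its contents.
    for ns in core.list_namespace().items:
        name = ns.metadata.name
        pods = core.list_namespaced_pod(name).items
        services = core.list_namespaced_service(name).items
        deployments = apps.list_namespaced_deployment(name).items
        print(f"{name}: {len(pods)} pods, {len(services)} services, "
              f"{len(deployments)} deployments")

    # Labels (metadata) select a subset of pods, for example those belonging
    # to a hypothetical 'orders' microservice.
    for p in core.list_pod_for_all_namespaces(label_selector="app=orders").items:
        print(p.metadata.namespace, p.metadata.name, p.status.phase)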

Multiple services, and even multiple namespaces, can be spread across a single physical machine. As indicated previously, each of those services is made up of pods. With so many moving parts, the complexity of monitoring even a modest Kubernetes deployment is high. Kubernetes probes are another key mechanism: they regularly check the health of a container, and if a container is found to be unhealthy, corrective action is taken (for example, a container that fails its liveness probe is restarted).
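
The probing behavior is declared on the container itself. The following is a sketch of a liveness probe expressed with the model classes of the official Python client; the image name, port, and /healthz path are illustrative assumptions, not taken from any particular deployment:

    # A sketch of declaring a health probe on a container using the model
    # classes of the official Kubernetes Python client. Image, port, and
    # path are hypothetical.
    from kubernetes import client

    container = client.V1Container(
        name="orders",
        image="example.com/orders:1.0",                    # hypothetical image
        ports=[client.V1ContainerPort(container_port=8080)],
        liveness_probe=client.V1Probe(
            http_get=client.V1HTTPGetAction(path="/healthz", port=8080),
            initial_delay_seconds=10,   # give the process time to start
            period_seconds=15,          # probe every 15 seconds
            failure_threshold=3,        # act after 3 consecutive failures
        ),
    )

When a container spec like this is embedded in a pod template, the kubelet runs the probe on the stated schedule, and a container that keeps failing its liveness probe is restarted.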

In summary, Kubernetes makes it easy to run distributed computing workloads. Workloads typically run across multiple server instances, and most real-world deployments involve hosting and operating multiple workloads simultaneously across the Kubernetes cluster. It is all about distributed deployment and centralized management. Thus, visualizing, sensing, and perceiving what is happening inside containerized environments is crucial for the success of microservices-centric applications. We write extensively about monitoring containerized environments, and about turning container data into container intelligence and operational excellence, in the next chapter.
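
As a first taste of that, the following sketch streams pod events from the API server with the official Python client's watch helper; it is only a starting point, and the monitoring stacks discussed in the next chapter build much richer views on top of such data:

    # A minimal sketch of 'sensing' a cluster: streaming pod lifecycle events
    # from the API server using the official Python client's watch helper.
    from kubernetes import client, config, watch

    config.load_kube_config()
    core = client.CoreV1Api()

    w = watch.Watch()
    for event in w.stream(core.list_pod_for_all_namespaces, timeout_seconds=60):
        pod = event["object"]
        print(event["type"], pod.metadata.namespace, pod.metadata.name,
              pod.status.phase)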
