Service mesh architectures

There are a couple of choices for leveraging service mesh solutions. A service mesh can be packaged as a library so that any microservices-centric application can import and use it on demand. We are used to importing programming language packages, libraries, and classes when building and running a typical application. Libraries such as Hystrix and Ribbon are well-known examples of this approach. This works well for applications that are written exclusively in one language.

The library approach has seen only limited adoption, however, because microservices-centric applications are increasingly coded in different languages. There are other approaches too, which are explained as follows:

Node agent: In this architecture, a separate agent runs on every node and can serve a heterogeneous mix of workloads; it is the opposite of the library model. Linkerd's recommended deployment in Kubernetes works like this, as do F5's Application Service Proxy (ASP) and the default Kubernetes kube-proxy. Because there is one agent on every node, some cooperation from the underlying infrastructure is needed. Most applications can't simply choose their own TCP stack, pick an ephemeral port number, and send or receive TCP packets directly; they delegate all of this to the OS infrastructure.

This model emphasizes the sharing of work resources. If a node agent allocates some memory to buffer data for one microservice, it might reuse that buffer for another service a few seconds later; that is, resources are shared in an efficient manner. However, managing shared resources is beset with challenges, and hence extra code is required for resource management. Another work resource that can easily be shared is configuration information. Instead of pushing configuration details to every pod, the node agent architecture allows them to be shared once per node.
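The per-node configuration sharing described above can be sketched as follows. This is a minimal illustration, not any real agent's implementation; the class name, the control-plane fetch callback, and the configuration keys are all hypothetical:

```python
import threading

class NodeAgent:
    """Hypothetical node agent: fetches shared configuration once per node
    and serves the same cached copy to every local service instance."""

    def __init__(self, fetch_config):
        self._fetch_config = fetch_config  # e.g. a call to the control plane
        self._config = None
        self._lock = threading.Lock()
        self.fetch_count = 0  # how many times the control plane was contacted

    def get_config(self):
        # All pods on this node share one cached copy instead of each
        # pod pulling its own configuration from the control plane.
        with self._lock:
            if self._config is None:
                self._config = self._fetch_config()
                self.fetch_count += 1
            return self._config

agent = NodeAgent(lambda: {"timeout_ms": 500, "retries": 3})
cfg_a = agent.get_config()  # first local service triggers the fetch
cfg_b = agent.get_config()  # second service reuses the cached copy
```

Both services receive the same object, and the control plane is contacted only once per node rather than once per pod.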

Sidecar: This is the newer model, widely used by Istio with Envoy; Conduit also uses a sidecar approach. In sidecar deployments, an application proxy in a containerized format is deployed alongside every application container. If there are redundant application containers, a copy of the proxy is deployed with each of them.

A load balancer typically sits between the client and the server. Advanced service mesh solutions attach a sidecar proxy that acts as a client-side library, and hence every client gets its own access to load balancing. This means that the single point of failure of any traditional load balancer is eliminated. The traditional load balancer is a server-side load balancer, whereas the sidecar proxy enables client-side load balancing.
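The client-side load balancing idea can be sketched as below. The class, the round-robin policy, and the endpoint addresses are illustrative assumptions; real sidecar proxies such as Envoy support several policies (round-robin, least-request, and others):

```python
import itertools

class ClientSideBalancer:
    """Sketch of client-side load balancing: each client (via its sidecar
    proxy) holds its own view of the server endpoints, so no central
    load balancer sits in the request path as a single point of failure."""

    def __init__(self, endpoints):
        self._endpoints = list(endpoints)
        # Round-robin selection; other policies (least-loaded,
        # latency-aware) would slot in here.
        self._cycle = itertools.cycle(self._endpoints)

    def pick(self):
        return next(self._cycle)

# Hypothetical endpoint addresses for three server replicas.
lb = ClientSideBalancer(["10.0.0.1:8080", "10.0.0.2:8080", "10.0.0.3:8080"])
picks = [lb.pick() for _ in range(6)]
```

Because every client carries its own balancer, the failure of any one client's proxy affects only that client, unlike a shared server-side load balancer.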

The central responsibility of a service mesh solution is to efficiently handle core networking tasks such as load balancing and service discovery. To ensure heightened service resiliency, a service mesh implements resiliency design patterns such as circuit breaking, retries, timeouts, and fault tolerance. When services are resilient, the resulting application is reliable. The underlying infrastructure modules also have to be highly available and stable; IT systems and business workloads have to contribute collectively to business continuity.
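One of the resiliency patterns mentioned above, circuit breaking, can be sketched in a few lines. This is a minimal illustration of the pattern, not the implementation used by any particular mesh; the thresholds and the failing `flaky` upstream are assumptions for the example:

```python
import time

class CircuitBreaker:
    """Minimal circuit breaker sketch: after `max_failures` consecutive
    failures the circuit opens and calls fail fast (without touching the
    upstream) until `reset_after` seconds have elapsed."""

    def __init__(self, max_failures=3, reset_after=30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None  # timestamp when the circuit opened

    def call(self, fn):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None  # half-open: allow one trial call
            self.failures = 0
        try:
            result = fn()
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0  # any success closes the circuit again
        return result

def flaky():
    # Hypothetical upstream that is currently down.
    raise ConnectionError("upstream unavailable")

breaker = CircuitBreaker(max_failures=2)
for _ in range(2):
    try:
        breaker.call(flaky)
    except ConnectionError:
        pass  # two real failures trip the breaker
# Further calls now fail fast instead of waiting on a dead upstream.
```

Failing fast protects both sides: the client does not pile up blocked requests, and the struggling upstream gets breathing room to recover.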
