Service discovery pattern

A microservice has to find the appropriate microservices with which to initiate a conversation in order to fulfill the identified business functionality. There are several service discovery mechanisms, service registries, and repositories. In the traditional web service world, WSDL and UDDI were used for service interfacing, discovery, and initiation. In earlier eras, we tinkered with RPC, RMI, CORBA, EJB, Jini, and so on. More recently, RESTful service interactions have become the most common way of establishing service connectivity and fulfillment.

However, microservices are quite distinct in the sense that they are more dynamic, varied, versatile, and numerous. Further on, services predominantly run inside virtual machines and containers. Virtualized and containerized environments are dynamic, with the inherent ability to perform live migration of virtualized resources and workloads. The API gateway is one solution for enabling services to discover the services they need in order to complete the business functionality. The service registry holds the required information, such as the location, host, and port, of all participating and contributing services. This mechanism sharply reduces the number of network hops for services trying to involve other services.
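The core of the registry idea can be sketched as a simple register/lookup store. The class and its API below are purely illustrative assumptions, not the interface of any particular product (Consul, Eureka, etcd, and others each have their own):

```python
# A minimal, in-memory service registry sketch. Real registries add
# health checks, leases/TTLs, and replication on top of this idea.
class ServiceRegistry:
    def __init__(self):
        self._instances = {}  # service name -> list of (host, port) pairs

    def register(self, name, host, port):
        """An instance announces its location on startup."""
        self._instances.setdefault(name, []).append((host, port))

    def deregister(self, name, host, port):
        """Remove an instance when it shuts down or fails a health check."""
        self._instances.get(name, []).remove((host, port))

    def lookup(self, name):
        """Return all known locations of a service."""
        return list(self._instances.get(name, []))


registry = ServiceRegistry()
registry.register("inventory", "10.0.0.5", 8080)
registry.register("inventory", "10.0.0.6", 8080)
print(registry.lookup("inventory"))  # -> [('10.0.0.5', 8080), ('10.0.0.6', 8080)]
```

Because a caller asks the registry directly for instance locations, it can contact the chosen instance in a single hop instead of being relayed through intermediaries.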

For enterprise-class services, the connectivity typically happens through a clustered load balancer whose location is fixed and well known. Services send their requests to the load balancer, which queries a service registry (possibly built into the load balancer itself) and then forwards each request to an available instance of the particular service.
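This server-side discovery flow can be sketched as follows. The registry here is modeled as a plain dictionary, and the round-robin policy and class name are assumptions for illustration, not a description of any specific load balancer:

```python
class RoundRobinBalancer:
    """Server-side discovery sketch: clients call the balancer at a fixed,
    well-known address; the balancer consults its registry and spreads
    requests across the live instances of the requested service."""

    def __init__(self, registry):
        # registry: service name -> list of (host, port) pairs. In a real
        # deployment this state is refreshed dynamically, not held statically.
        self._registry = registry
        self._next = {}  # per-service round-robin cursor

    def route(self, service_name):
        instances = self._registry.get(service_name, [])
        if not instances:
            raise LookupError("no instances registered for " + service_name)
        i = self._next.get(service_name, 0)
        self._next[service_name] = i + 1
        return instances[i % len(instances)]


balancer = RoundRobinBalancer({"inventory": [("10.0.0.5", 8080), ("10.0.0.6", 8080)]})
print(balancer.route("inventory"))  # -> ('10.0.0.5', 8080)
print(balancer.route("inventory"))  # -> ('10.0.0.6', 8080)
```

The design trade-off is that clients stay simple (they only need the balancer's address), at the cost of an extra network hop and the balancer becoming a component that must itself be made highly available.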

Popular clustering solutions such as Kubernetes (https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/services.md) and Marathon (https://mesosphere.github.io/marathon/docs/service-discovery-load-balancing.html) run a proxy on each host. The proxy functions as a server-side discovery router/load balancer. To access a service, a client service connects to the local proxy using the port assigned to that service, and the proxy forwards the request to a service instance running somewhere in the cluster. Routers, application delivery controllers (ADCs), load balancers, and other network solution modules are made available in large-scale IT environments such as clouds.
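The per-host proxy's routing decision can be sketched as a port-to-service lookup followed by instance selection. The port numbers, service names, and `resolve` function below are hypothetical, chosen only to illustrate the mechanism described above:

```python
import random

# Assumed static snapshots for illustration; a real proxy keeps these in
# sync with the cluster's registry or control plane.
PORT_TO_SERVICE = {31001: "inventory", 31002: "billing"}
CLUSTER_STATE = {
    "inventory": [("10.0.1.4", 8080), ("10.0.2.9", 8080)],
    "billing": [("10.0.3.2", 9090)],
}


def resolve(local_port):
    """Given the local port a client connected to on its own host, pick a
    backend instance somewhere in the cluster to forward the request to."""
    service = PORT_TO_SERVICE[local_port]
    return random.choice(CLUSTER_STATE[service])


print(resolve(31001))  # one of the two inventory instances
```

From the client's point of view, the service always lives at a stable local port; the churn of instances coming and going in the cluster is absorbed entirely by the proxy.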
