Asynchronous messaging patterns for event-driven microservices

Here are a few popular asynchronous messaging patterns that enable the realization of event-driven, asynchronously communicating microservices. Let's look at the following points:

  • Event sourcing: Today, events are pervasive and occur in large numbers, thanks to the broad and deep proliferation of multi-faceted sensors, actuators, drones, robots, electronics, digitized elements, connected devices, factory machinery, social networking sites, integrated applications, decentralized microservices, distributed data sources and stores, and so on. Events from these varied and geographically distributed sources get streamed into an event store, which is a database of events. The event store provides an API that enables consuming services to subscribe to and use authorized events, and it primarily operates as a message broker. Event sourcing persists the state of a business entity, such as an order, as a sequence of state-changing events. Whenever the state of the business entity changes, a new event is triggered and appended to the list of events; this is similar, in a way, to how log aggregation works. Event sourcing is an excellent way to add visibility into what is happening to a service. The application can easily reconstruct a business entity's current state by replaying the events:

The most common flow for event sourcing is as follows:

  • Message Receiver: Receives and converts the incoming request into an event message.
  • Event Store: Stores the event messages sequentially. It notifies the listeners/consumers.
  • Event Listener: Represents the code in charge of executing the respective business logic according to the event type.
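The flow above can be sketched in a few lines of Python. This is a minimal, illustrative in-memory model, not any specific product's API; the `EventStore` class, the event types (`ItemAdded`, `OrderPlaced`), and the order entity are all invented for the sketch:

```python
from dataclasses import dataclass

@dataclass
class Event:
    entity_id: str
    type: str
    data: dict

class EventStore:
    """Receives events, appends them sequentially, notifies listeners."""
    def __init__(self):
        self._log = []            # events are only ever appended
        self._listeners = []

    def subscribe(self, listener):
        self._listeners.append(listener)

    def append(self, event):
        self._log.append(event)   # sequential, immutable log
        for listener in self._listeners:
            listener(event)       # event listeners run their business logic

    def replay(self, entity_id):
        return [e for e in self._log if e.entity_id == entity_id]

def current_state(store, entity_id):
    """Reconstruct an order's current state by replaying its events."""
    state = {"items": [], "status": "new"}
    for e in store.replay(entity_id):
        if e.type == "ItemAdded":
            state["items"].append(e.data["item"])
        elif e.type == "OrderPlaced":
            state["status"] = "placed"
    return state
```

Note that `current_state` never reads a stored "current" record; the state exists only as the fold of all past events, which is the essence of the pattern.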

Apache Kafka is a widely used event store. Events are grouped into multiple logical collections called topics, and these topics are partitioned to enable parallel processing. A partitioned topic functions like a queue; that is, events are delivered to consumers in the order in which they were received. However, unlike with a queue, events are persisted and remain available to other consumers. Older messages are automatically deleted based on the stream's time-to-live (TTL) setting. Event consumers can consume an event message at any time and replay messages any number of times. Apache Kafka can scale quickly to handle millions of events per second.
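The queue-like-but-replayable semantics described above can be modelled in plain Python. This is a toy model of a partitioned topic, not the Kafka client API; the `PartitionedTopic` class and its methods are invented for illustration:

```python
class PartitionedTopic:
    """Toy model: per-partition ordering, retention, and replay."""
    def __init__(self, partitions=2):
        self._partitions = [[] for _ in range(partitions)]

    def publish(self, key, event):
        # Events with the same key land in the same partition, so they
        # are delivered to consumers in the order they were received.
        p = hash(key) % len(self._partitions)
        self._partitions[p].append(event)
        return p

    def read(self, partition, offset=0):
        # Reading does not delete: unlike a queue, the events stay
        # available, and any consumer can replay from any offset.
        return self._partitions[partition][offset:]
```

A consumer that crashes can simply call `read` again from its last committed offset, which is how replayability falls out of retention plus per-partition ordering.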

The idea is to represent every state transition of the application in the form of an immutable event. Events are then stored in a log or journal as they occur, and they can also be queried and stored permanently. This ultimately shows how the application's state, as a whole, has evolved over time.

  • Publisher/subscriber: This is emerging as the way forward for asynchronous, real-time data distribution. The producer does not know about the subscribers, and subscribers register to receive messages without any knowledge of the publishers, so this pattern comprehensively decouples microservices. It also lets applications scale to handle any number of subscribers, with the middleware broker guaranteeing the required scalability. A microservices architecture creates loosely and lightly coupled microservices; hence, independently deploying, updating, and horizontally scaling microservices is quite easy. However, composing microservices through service orchestration produces tightly coupled ("sticky") microservices, and hence experts bat for service choreography. This pattern is shown in the following diagram:

In this example, the PORTFOLIO service has to add a stock position. Instead of calling the accounts service directly, the PORTFOLIO service publishes an event to the POSITION ADDED event stream. The accounts service has subscribed to that event stream, and hence it receives a notification. This indirect, intermediary-enabled asynchronous communication ensures that the participating services are totally decoupled. This means that a service can be replaced or substituted with another, more advanced service, and services can be quickly scaled out with additional containerized microservice instances. The only flaw here is that there is no centralized monitoring and management system in place:
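The portfolio example can be sketched with a minimal in-process broker. The stream name mirrors the text; the `Broker` class and its API are invented for illustration, and a real deployment would use a middleware broker rather than in-process callbacks:

```python
from collections import defaultdict

class Broker:
    """Minimal pub/sub hub: publishers and subscribers never meet."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, stream, handler):
        self._subscribers[stream].append(handler)

    def publish(self, stream, event):
        # The publisher never learns who, or how many, consumed the event.
        for handler in self._subscribers[stream]:
            handler(event)

broker = Broker()
account_positions = []

# The accounts service registers interest without knowing the publisher.
broker.subscribe("POSITION ADDED", account_positions.append)

# The portfolio service publishes instead of calling accounts directly.
broker.publish("POSITION ADDED", {"symbol": "ACME", "qty": 100})
```

Swapping the accounts service for a replacement is just a different `subscribe` call; the portfolio service's code does not change, which is the decoupling the pattern buys.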

  • Event firehose pattern: When many events are being produced by several producers and many consumers are waiting for event messages, there is a need for a common hub for message exchange. The event messages get exchanged via topics. As indicated, in the case of asynchronous command calls, the exchange happens via queues. A common implementation of this pattern looks a little something like this:
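The topic-versus-queue distinction drawn here can be made concrete with a small sketch (both classes are invented for illustration): a topic fans each event out to every consumer, while a queue hands each message to exactly one consumer, exactly once:

```python
from collections import deque

class Topic:
    """Fan-out: every subscribed consumer sees every event."""
    def __init__(self):
        self.consumers = []

    def publish(self, event):
        for consume in self.consumers:
            consume(event)          # everyone gets a copy

class Queue:
    """Point-to-point: each message is consumed exactly once."""
    def __init__(self):
        self._messages = deque()

    def send(self, message):
        self._messages.append(message)

    def receive(self):
        return self._messages.popleft()  # removed once taken
```

This is why the firehose hub uses topics (many producers, many independent consumers), while command calls use queues (one consumer should act on each command).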

  • Asynchronous command calls: Certain scenarios mandate proper orchestration over asynchronous calls. These are usually local integration use cases; another prominent use case is connecting closely related microservices that need to exchange messages with a delivery guarantee. Here, microservices interact in an asynchronous manner, and messages are typically exchanged using queues, which deliver messages in a point-to-point manner. Most of the conversations here are short-lived. This is a traditional broker-centric use case, reliably connecting endpoints through asynchronous communication.

This pattern is demanded when one microservice has to publish an event for a second microservice to process, and then has to wait to receive and read an appropriate reply event from that second microservice. Consider the previously mentioned portfolio example. A standard REST API call tells the portfolio service to add a stock position. The portfolio service posts an event to the position-added queue for the accounts service to process. The portfolio service then waits for the accounts service to post a reply event to the account-updated queue, so that the original REST API call can return the data received from that event to the client service.
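This request/reply-over-queues flow can be sketched with Python's standard `queue` module. The queue names follow the text; the service functions are invented, and the two services run in one process here only for illustration (in reality the accounts service would consume from its queue in a separate process):

```python
import queue

# The two point-to-point queues from the portfolio example.
position_added = queue.Queue()
account_updated = queue.Queue()

def accounts_service():
    """Processes one command from its queue, then posts a reply event."""
    cmd = position_added.get()
    account_updated.put({"account": cmd["account"], "status": "updated"})

def portfolio_add_position(account, symbol):
    """Handles the REST call: publish a command, wait for the reply."""
    position_added.put({"account": account, "symbol": symbol})
    accounts_service()            # stands in for the other process
    return account_updated.get()  # block until the reply event arrives
```

The delivery guarantee comes from the broker backing the queues; the portfolio service's `get()` is the "wait for the reply event" step described above.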

  • Saga pattern: We all know that each microservice is empowered with its own database. However, some business operations involve multiple services, and each atomic business operation that spans multiple services may involve several transactions at a technical level. The challenge here is how to ensure data consistency in a multi-database environment: when multiple databases have to be accessed, a traditional local ACID-compliant transaction is not sufficient, and the situation demands distributed transactions. One viable option is to leverage the XA protocol, which implements the two-phase commit (2PC) pattern. But for web-scale applications, 2PC may not work well. To eliminate the disadvantages of 2PC, experts have recommended trading ACID for basically available, soft state, and eventually consistent (BASE) semantics.

Experts recommend implementing each business transaction that spans multiple services as a saga. That is, a saga is a sequence of local transactions, and it can be viewed as an application-level distributed coordination of multiple transactions. Each local transaction updates its database and publishes an event message that triggers the next local transaction in the saga. If a local transaction fails for some reason, the saga executes a series of compensating transactions that undo the changes made by the preceding local transactions.
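The compensation logic can be sketched generically. This is an illustrative in-process coordinator, not a production saga framework; the step names and the `run_saga` helper are invented, and the event-driven hand-off between steps is collapsed into a simple loop:

```python
def run_saga(steps, state):
    """Run (transaction, compensation) pairs; undo on failure.

    steps: list of (tx, undo) callables, each acting on `state`,
    standing in for local transactions against separate databases.
    """
    done = []
    for tx, undo in steps:
        try:
            tx(state)
            done.append(undo)
        except Exception:
            # A local transaction failed: run the compensating
            # transactions of the completed steps, in reverse order.
            for compensate in reversed(done):
                compensate(state)
            return False
    return True
```

The key property is that consistency is restored semantically (by compensation) rather than atomically (by a global rollback), which is the ACID-for-BASE trade mentioned above.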
