The asyncio library

The asyncio (https://docs.python.org/3/library/asyncio.html) library, which started as an experiment called Tulip led by Guido van Rossum, provides all the infrastructure needed to build asynchronous programs based on an event loop.

The library predates the introduction of async, await, and native coroutines in the language.

The asyncio library is inspired by Twisted, and offers classes that mimic Twisted transports and protocols. Building a network application based on these consists of combining a transport class (like TCP) and a protocol class (such as HTTP), and using callbacks to orchestrate the execution of the various parts.
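To give an idea of that callback style, here is a minimal sketch of a TCP echo server built on asyncio's protocol and transport classes; the EchoProtocol name, host, and port are arbitrary choices for the example:

    import asyncio

    class EchoProtocol(asyncio.Protocol):
        def connection_made(self, transport):
            # Called by the loop when a client connects
            self.transport = transport

        def data_received(self, data):
            # Called by the loop when data arrives on the socket
            self.transport.write(data)
            self.transport.close()

    async def main():
        loop = asyncio.get_running_loop()
        server = await loop.create_server(EchoProtocol, '127.0.0.1', 8888)
        async with server:
            await server.serve_forever()

    asyncio.run(main())

The loop drives everything here by invoking these callbacks as I/O events occur on the underlying transport.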

But with the introduction of native coroutines, callback-style programming is less appealing, since it is much more readable to orchestrate the execution order via await calls. You can use coroutines with asyncio's protocol and transport classes, but the original design was not meant for that and requires a bit of extra work.

The central feature, however, is the event loop API and all the functions used to schedule how the coroutines will get executed. An event loop uses the operating system's I/O poller (devpoll, epoll, or kqueue) to register the execution of a function when a given I/O event occurs.

For instance, the loop can wait for data to become available on a socket before triggering a function that will process that data. But the pattern generalizes to any event. For instance, when coroutine A awaits coroutine B, the call to asyncio sets up an event that is triggered when coroutine B finishes, and coroutine A waits on that event before resuming.

The result is that if your program is split into a lot of interdependent coroutines, their executions are interleaved. The beauty of this pattern is that a single-threaded application can run thousands of coroutines concurrently without having to be thread-safe and without all the complexity that it entails.
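As an illustration, the following sketch runs a thousand coroutines concurrently in a single thread; the coroutine names and the one-second asyncio.sleep() call standing in for a database query are placeholders for the example:

    import asyncio

    async def query_database(n):
        # Stand-in for an I/O-bound call; control returns to the loop here
        await asyncio.sleep(1)
        return 'result %d' % n

    async def handle_request(n):
        # This coroutine awaits another one; the loop resumes it
        # once query_database(n) is done
        data = await query_database(n)
        print(data)

    async def main():
        # A thousand handlers, interleaved by a single-threaded event loop
        await asyncio.gather(*(handle_request(n) for n in range(1000)))

    asyncio.run(main())

All one thousand handlers finish in roughly one second, because they all wait at the same time instead of one after the other.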

To build an asynchronous microservice, the typical pattern looks like this:

    async def my_view(request): 
        query = await process_request(request) 
        data = await some_database.query(query) 
        response = await build_response(data) 
        return response 

An event loop running this coroutine for each incoming request will be able to accept hundreds of new requests while waiting for each step to finish.

If the same service were built with Flask and run, as is typical, with a single thread, each new request would have to wait for the completion of the previous one to get the attention of the Flask app. Hammering the service with several hundred concurrent requests would lead to timeouts in no time.

The execution time for a single request is the same in both cases, but the ability to run many requests concurrently and interleave their execution is what makes asynchronous applications better suited for I/O-bound microservices. Our application can do plenty of other work with the CPU while waiting for a database call to return.

And if some of your services have CPU-bound tasks, asyncio provides a way to run that code in a separate thread or process from within the loop.
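For example, the loop's run_in_executor() method can push a blocking function to an executor and await its result; the crunch_numbers() function and the input size below are made up for the sake of the sketch:

    import asyncio
    from concurrent.futures import ProcessPoolExecutor

    def crunch_numbers(n):
        # Hypothetical CPU-bound work that would block the loop if run directly
        return sum(i * i for i in range(n))

    async def main():
        loop = asyncio.get_running_loop()
        # Passing None would use the default thread pool; a process pool
        # sidesteps the GIL for CPU-bound code
        with ProcessPoolExecutor() as pool:
            result = await loop.run_in_executor(pool, crunch_numbers, 10_000_000)
        print(result)

    if __name__ == '__main__':
        asyncio.run(main())

While the executor works on the CPU-bound task, the event loop stays free to keep serving other coroutines.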

In the next two sections, we will present two frameworks based on asyncio, which can be used to build microservices.
