Introducing aiohttp

The next module we are going to review, aiohttp, is frequently used in conjunction with asyncio because it provides a framework for making asynchronous HTTP requests. It is also an excellent solution for building the server part of a web application with Python 3.5+.

The most common tool for making HTTP requests in Python is the requests module. Its main limitation is that its operations are blocking by default: when a thread calls a method such as get or post, it pauses until the response has been received.
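
The following is a minimal sketch of that blocking behavior, assuming the requests package is installed; the delay URL is only illustrative:

import time
import requests

urls = ['http://httpbin.org/delay/1'] * 3

start = time.time()
for url in urls:
    # Each call blocks the thread until the full response arrives,
    # so the three requests run one after another (~3 seconds total)
    requests.get(url)
print(time.time() - start, 'seconds passed')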

To download multiple resources at once with requests, we would need many threads. aiohttp, on the other hand, allows us to make requests asynchronously from a single thread. You can install aiohttp by using the pip install aiohttp command.

The documentation for aiohttp is available at http://aiohttp.readthedocs.io/en/stable, and the source code is available at https://github.com/aio-libs/aiohttp.

ClientSession is the recommended interface for making requests with aiohttp. A ClientSession stores cookies between requests and keeps the objects that are common to all requests, such as the event loop and the connection pool.

After you open a client session, you can use it to make requests; each request is itself another asynchronous operation. Opening the session and the request with async with statements ensures that they are closed properly in all cases.

To start the execution, the coroutine needs to run in an event loop, so you obtain the asyncio event loop and pass the coroutine to its run_until_complete method.

You can find the following code in the aiohttp_request.py file:

#!/usr/local/bin/python3

import asyncio
from aiohttp import ClientSession
import time

async def request():
    # Open the client session; async with closes it automatically
    async with ClientSession() as session:
        # Perform the GET request; the response is also a context manager
        async with session.get("http://httpbin.org/headers") as response:
            response = await response.read()
            print(response.decode())

if __name__ == '__main__':
    t1 = time.time()
    # Run the coroutine in the asyncio event loop
    loop = asyncio.get_event_loop()
    loop.run_until_complete(request())
    print(time.time() - t1, 'seconds passed')

This is the output of the preceding script:

{
  "headers": {
    "Accept": "*/*",
    "Accept-Encoding": "gzip, deflate",
    "Host": "httpbin.org",
    "User-Agent": "Python/3.6 aiohttp/3.5.4"
  }
}
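
The preceding example performs a single request; where aiohttp pays off is when several requests share one session and run concurrently. The following is a minimal sketch of that pattern using asyncio.gather; the URLs and the fetch helper are illustrative and not part of the book's code files:

#!/usr/local/bin/python3

import asyncio
import time
from aiohttp import ClientSession

urls = ['http://httpbin.org/delay/1'] * 3

async def fetch(session, url):
    # Reuse the shared session; its connection pool and cookies are
    # common to every request made through it
    async with session.get(url) as response:
        return await response.read()

async def fetch_all():
    async with ClientSession() as session:
        # Schedule all requests at once; they run concurrently, so the
        # total time is roughly that of the slowest request
        return await asyncio.gather(*(fetch(session, url) for url in urls))

if __name__ == '__main__':
    t1 = time.time()
    loop = asyncio.get_event_loop()
    results = loop.run_until_complete(fetch_all())
    print(len(results), 'responses in', time.time() - t1, 'seconds')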

In a similar way, we can use the aiohttp module to request a URL with aiohttp.ClientSession().get(url). This example uses the older generator-based coroutine style: the function is decorated with @asyncio.coroutine and the response is awaited with the yield from keyword.

You can find the following code in the aiohttp_single_request.py file:

#!/usr/bin/python3

import asyncio
import aiohttp

url = 'http://httpbin.org/headers'

@asyncio.coroutine
def get_page():
    # The session is created but never closed, which produces the
    # "Unclosed client session" warning shown in the output
    resp = yield from aiohttp.ClientSession().get(url)
    text = yield from resp.read()
    return text

if __name__ == '__main__':
    loop = asyncio.get_event_loop()
    content = loop.run_until_complete(get_page())
    print(content)
    loop.close()

This is the output of the preceding script:


Unclosed client session
client_session: <aiohttp.client.ClientSession object at 0x000001BFE94117F0>
Unclosed connector
connections: ['[(<aiohttp.client_proto.ResponseHandler object at 0x000001BFE954F708>, 789153.843)]']
connector: <aiohttp.connector.TCPConnector object at 0x000001BFE9411EB8>
b'{ "headers": { "Accept": "*/*", "Accept-Encoding": "gzip, deflate", "Host": "httpbin.org", "User-Agent": "Python/3.7 aiohttp/3.5.4" } } '