NOTE: Use Python 3.8 and above.
This task introduces the async/await feature in Python.
It contains three subtasks that need to be completed to implement async/await in Python.
Python provides parallel programming through threads and multiprocessing. However, because of the Global Interpreter Lock (GIL), Python cannot provide the kind of parallelism that multi-threading offers in other languages such as Java or C++.
Effectively, only one thread runs at a time while the others wait for the GIL to be released, so Python threads end up running in an essentially synchronous manner.
To tackle this, different languages provide constructs for running blocking operations concurrently.
For example, JavaScript uses Promises, and Python provides async/await via asyncio, which follows a concurrency paradigm.
-
Task 1 - Create 3 coroutines for execution and see if you can parallelise their execution.
Details about Task
```python
import asyncio
import time

async def sleep_coro(duration):
    await asyncio.sleep(duration)

async def main():
    obj1 = sleep_coro(1)
    obj2 = sleep_coro(2)
    obj3 = sleep_coro(3)
    # The three coroutines are awaited one after another,
    # so this takes 1 + 2 + 3 seconds to execute.
    start = time.time()
    await obj1
    await obj2
    await obj3
    time_taken = time.time() - start
    print('Time Taken {0}'.format(time_taken))

asyncio.run(main())
```
Now google how you would parallelise the execution of these coroutines.
Solution to Task 1
```python
import asyncio
import time

async def sleep_coro(duration):
    await asyncio.sleep(duration)

async def main():
    obj1 = sleep_coro(1)
    obj2 = sleep_coro(2)
    obj3 = sleep_coro(3)
    # asyncio.gather runs the three coroutines concurrently,
    # so this takes max(1, 2, 3) seconds to execute.
    start = time.time()
    await asyncio.gather(obj1, obj2, obj3)
    time_taken = time.time() - start
    print('Time Taken {0}'.format(time_taken))

asyncio.run(main())
```
Task for the above part - Now replace `sleep_coro` with a function that downloads a page from https://reqres.in/api/users?page{el} (this has been changed), where `el` is an element of `arr = [1, 2, 3]`. (Changed from https://www.google.com/search?q={name_arr}.) Use this resource for more details on downloading a page: https://docs.aiohttp.org/en/stable/#client-example
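A minimal sketch of one way this could look with aiohttp, using the URL pattern exactly as given above; the helper name `fetch_page` is illustrative and not prescribed by the task:

```python
import asyncio
import time

import aiohttp

async def fetch_page(session, el):
    # URL pattern taken verbatim from the task description above.
    url = 'https://reqres.in/api/users?page{0}'.format(el)
    async with session.get(url) as response:
        return await response.text()

async def main():
    arr = [1, 2, 3]
    start = time.time()
    async with aiohttp.ClientSession() as session:
        # gather() runs the three downloads concurrently,
        # just like the sleep_coro example above.
        pages = await asyncio.gather(*(fetch_page(session, el) for el in arr))
    print('Downloaded {0} pages in {1:.2f}s'.format(len(pages), time.time() - start))

asyncio.run(main())
```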
-
Task 2
Task 2 - Write a script to download the JSON file from the URL https://xkcd.com/{comic_id}/info.0.json, where `comic_id` is a natural number from 1 to 200. Download the JSON response and save it in a file. Do it in a synchronous manner, something like this:

```python
for i in range(1, 201):
    # pseudo code
    # download the response
    # create a file with a unique name
    # paste the JSON contents into the file
```
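A hedged sketch of the synchronous version, assuming the `requests` library and illustrative file names of the form `comic_{id}.json` (neither is prescribed by the task):

```python
import json
import time

import requests  # assumed HTTP client; any synchronous client works

def download_all():
    start = time.time()
    for comic_id in range(1, 201):
        url = 'https://xkcd.com/{0}/info.0.json'.format(comic_id)
        response = requests.get(url)
        response.raise_for_status()
        # Save each comic's JSON under a unique file name.
        with open('comic_{0}.json'.format(comic_id), 'w') as f:
            json.dump(response.json(), f)
    print('Synchronous run took {0:.2f}s'.format(time.time() - start))

download_all()
```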
Now, using async/await, parallelise the downloading of the files so that you can accelerate the execution.
Record the time taken and compare the difference in execution time between the two approaches.
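A hedged sketch of the concurrent version using aiohttp and asyncio.gather; the names `fetch_comic` and `comic_{id}.json` are again illustrative assumptions:

```python
import asyncio
import json
import time

import aiohttp

async def fetch_comic(session, comic_id):
    url = 'https://xkcd.com/{0}/info.0.json'.format(comic_id)
    async with session.get(url) as response:
        data = await response.json()
    # Writing the file is synchronous, but it is fast compared to the network call.
    with open('comic_{0}.json'.format(comic_id), 'w') as f:
        json.dump(data, f)

async def main():
    start = time.time()
    async with aiohttp.ClientSession() as session:
        # Schedule all 200 downloads at once and let the event loop interleave them.
        await asyncio.gather(*(fetch_comic(session, i) for i in range(1, 201)))
    print('Async run took {0:.2f}s'.format(time.time() - start))

asyncio.run(main())
```

The speedup comes from overlapping the network waits; the total time is bounded roughly by the slowest request rather than the sum of all of them.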
- https://realpython.com/async-io-python/
- https://docs.python.org/3/library/asyncio.html
- https://docs.aiohttp.org/en/stable/#client-example
- Asyncio, Async/Await in Python, AioHttp
- From this task, you will learn how asyncio achieves concurrency in Python.