Added Major Changes #3
base: main
Conversation
- On crawl, a txt file is filled with the grabbed URLs (see the sketch after this list).
- Defaults to 100 threads; this can be lowered or raised to whatever you are comfortable with.
- For point 3, a sample view of the download details is provided below.
- Removed unused modules from main.py.
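A minimal sketch of what this crawl-then-download flow could look like, assuming `requests` plus `BeautifulSoup` for grabbing image URLs and `concurrent.futures.ThreadPoolExecutor` for the parallel download. The file name `urls.txt`, the helper names, the example page URL, and the 100-thread wiring are illustrative assumptions, not code taken from this PR:

```python
# Hypothetical sketch of the crawl -> urls.txt -> threaded download flow.
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

import requests
from bs4 import BeautifulSoup

URL_FILE = Path("urls.txt")   # txt file filled with the grabbed URLs on crawl
DEFAULT_THREADS = 100         # default of 100 threads; lower or raise as needed


def crawl_image_urls(page_url: str) -> list[str]:
    """Grab every image URL found on the page."""
    html = requests.get(page_url, timeout=30).text
    soup = BeautifulSoup(html, "html.parser")
    return [img["src"] for img in soup.find_all("img", src=True)]


def download_one(url: str) -> None:
    """Download a single file, named after the last path segment."""
    resp = requests.get(url, timeout=60)
    resp.raise_for_status()
    Path(url.rstrip("/").split("/")[-1] or "image.bin").write_bytes(resp.content)


if __name__ == "__main__":
    urls = crawl_image_urls("https://example.com/gallery")
    URL_FILE.write_text("\n".join(urls))              # step 1: fill the txt file

    with ThreadPoolExecutor(max_workers=DEFAULT_THREADS) as pool:
        # step 2: download everything listed in the txt file in parallel
        pool.map(download_one, URL_FILE.read_text().splitlines())
```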
Is this change under review to be merged into your code?
Can you check this and merge it into your code? It is optimised.
Hi, please give me a week to review and merge. I'm a little busy with other work.
Any update, please?
Why do we need to have all the image URLs in a txt file?
@aneesh08192 Waiting for your input to merge this into the repo.
@anburocky3 Initially, I crawl all the URLs, and then I start the multithreaded parallel download.
Can we do it without saving the URLs in an unnecessary txt file?
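If the txt file is really only a hand-off between the crawl step and the download step, one possible answer to the question above is to keep the URL list in memory and pass it straight to the thread pool. A rough sketch, reusing the hypothetical `crawl_image_urls` and `download_one` helpers from the earlier snippet:

```python
# Rough sketch: skip urls.txt and feed the crawled URLs straight to the pool.
# crawl_image_urls and download_one are the hypothetical helpers sketched above.
from concurrent.futures import ThreadPoolExecutor

urls = crawl_image_urls("https://example.com/gallery")   # kept in memory only

with ThreadPoolExecutor(max_workers=100) as pool:
    pool.map(download_one, urls)                          # nothing written to disk first
```

Writing the list out can still be useful for resuming interrupted runs, so keeping or dropping the file is a trade-off rather than an obvious removal.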
1. Multithreaded downloading enabled
2. Improved stability and download speed (a generic sketch of one common approach follows this list)
3. Added more info to the README
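The PR text does not show how the stability improvement was achieved, so the following is only a generic illustration of a common approach in threaded `requests` downloaders: one `Session` per worker thread plus automatic retries on transient HTTP errors. All names here are assumptions, not code from this change:

```python
# Illustrative only: per-thread Session reuse with retries on transient failures.
import threading

import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

_local = threading.local()


def _session() -> requests.Session:
    """Return a retrying Session that is private to the calling thread."""
    if not hasattr(_local, "session"):
        s = requests.Session()
        retries = Retry(total=3, backoff_factor=0.5,
                        status_forcelist=[500, 502, 503, 504])
        s.mount("https://", HTTPAdapter(max_retries=retries))
        s.mount("http://", HTTPAdapter(max_retries=retries))
        _local.session = s
    return _local.session


def fetch(url: str) -> bytes:
    """Fetch a URL with the thread-local Session; raises on HTTP errors."""
    resp = _session().get(url, timeout=60)
    resp.raise_for_status()
    return resp.content
```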
Major Discovery:
[screenshot] ... in the search bar.

End of Download view:
[screenshot]