
Can't crawl all pages #47

@Johnny9828

Description


After 3,000-7,000 successful crawls, the program throws an error.
It just says "An error has occured (link)" and returns to the start screen.

Is there any way to fix this in the config, or is there an option to download pages 0-2000 first, then 2050-4000, etc., to avoid this error? It shouldn't be a RAM problem, as I have 64 GB of DDR5.

Thank you!
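
For illustration, this is roughly the kind of range-based batching I mean; a rough Python sketch (`links.txt`, `progress.json`, and `crawl_page` are placeholders here, not this project's actual files or API):

```python
# Rough sketch: crawl a link list in fixed-size batches with a saved offset,
# so an error only costs the current batch instead of the whole run.
# crawl_page(), links.txt and progress.json are placeholders, not the
# project's real API or files.
import json
import urllib.request
from pathlib import Path

BATCH_SIZE = 2000
PROGRESS_FILE = Path("progress.json")

def crawl_page(url: str) -> None:
    # Placeholder fetch; the real crawler does much more than this.
    with urllib.request.urlopen(url, timeout=30) as resp:
        resp.read()

def load_offset() -> int:
    # Resume from the last completed batch if a progress file exists.
    if PROGRESS_FILE.exists():
        return json.loads(PROGRESS_FILE.read_text())["done"]
    return 0

def save_offset(done: int) -> None:
    PROGRESS_FILE.write_text(json.dumps({"done": done}))

def main() -> None:
    links = Path("links.txt").read_text().splitlines()
    for start in range(load_offset(), len(links), BATCH_SIZE):
        batch = links[start:start + BATCH_SIZE]
        for url in batch:
            try:
                crawl_page(url)
            except Exception as exc:
                # Skip a failing page instead of aborting the whole run.
                print(f"error on {url}: {exc}")
        save_offset(start + len(batch))

if __name__ == "__main__":
    main()
```

Even a config option along these lines (a batch size plus a resume offset) would work for me.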
