Following a report from the website owner, we have found that the crawler appears to be ignoring robots.txt. The rules at https://www.ukmodelshops.co.uk/robots.txt disallow access to /form/... but for some reason the crawler is still fetching URLs under that path, e.g. this record from the Kafka crawled topic:
```json
{
  "hop_path": "LRLLL",
  "status_code": 404,
  "seed": "",
  "warc_filename": null,
  "annotations": "ip:91.207.50.60",
  "thread": 494,
  "content_digest": "sha1:3LNM6LWAVQ35EREXCHMWFKHHPFKSSQTD",
  "url": "https://ukmodelshops.co.uk/form/supplierAmend/13136/40352-IanRathboneModelRailwayPainting",
  "via": "https://ukmodelshops.co.uk/suppliers/op/40352-IanRathboneModelRailwayPainting",
  "warc_offset": null,
  "crawl_name": "dc2023",
  "start_time_plus_duration": "20230826193201479+459",
  "extra_info": {
    "scopeDecision": "ACCEPT by rule #1 WatchedFileSurtPrefixedDecideRule"
  },
  "size": 12948,
  "host": "ukmodelshops.co.uk",
  "mimetype": "text/html",
  "content_length": 11820,
  "timestamp": "2023-08-26T19:32:01.939Z"
}
```
The crawler is fetching robots.txt (as seen in the internal crawled-URL CDX), and the fetched file can be seen internally at https://www.webarchive.org.uk/act/wayback/en/archive/20230820144656/https://ukmodelshops.co.uk/robots.txt (more recent crawls are identical, so they are not currently displayed in Wayback).
The robots.txt file is quite long and complex, so perhaps the robots.txt parser is having trouble with it. Unfortunately, this is quite difficult to debug effectively; re-parsing the file locally, as sketched below, might help narrow it down.
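As a minimal sketch of that local check, the snippet below fetches the live robots.txt and evaluates the offending URL from the log record against it. It assumes crawler-commons' SimpleRobotRulesParser, which is not necessarily the exact parser (or version) the crawler itself uses, and "example-bot" is a placeholder user-agent token:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

import crawlercommons.robots.BaseRobotRules;
import crawlercommons.robots.SimpleRobotRulesParser;

public class RobotsCheck {
    public static void main(String[] args) throws Exception {
        String robotsUrl = "https://www.ukmodelshops.co.uk/robots.txt";
        // The disallowed URL from the crawl-log record above:
        String testUrl = "https://ukmodelshops.co.uk/form/supplierAmend/13136/40352-IanRathboneModelRailwayPainting";

        // Fetch the live robots.txt.
        HttpClient client = HttpClient.newHttpClient();
        HttpResponse<byte[]> response = client.send(
                HttpRequest.newBuilder(URI.create(robotsUrl)).build(),
                HttpResponse.BodyHandlers.ofByteArray());

        // Parse and evaluate. "example-bot" is a placeholder; substitute the
        // user-agent token the crawler actually matches against.
        BaseRobotRules rules = new SimpleRobotRulesParser().parseContent(
                robotsUrl, response.body(), "text/plain", "example-bot");
        System.out.println("allowed? " + rules.isAllowed(testUrl));
    }
}
```

If this prints false, the rules themselves parse cleanly and suspicion shifts to the crawler's own parser (or to it acting on a stale cached copy of robots.txt); if it prints true, the file's structure may genuinely confuse parsers.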
(As an interim measure, I've blocked the DC from crawling that site.)
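For reference, a hedged sketch of what such a block might look like: the record above shows scoping is driven by a WatchedFileSurtPrefixedDecideRule, so assuming the block is applied through a watched SURT-prefix exclusion file (an assumption about this setup, not confirmed), an entry covering the host and any subdomains would be:

```
http://(uk,co,ukmodelshops,
```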