A test page designed to validate web crawlers' ability to respect crawling controls and anti-scraping measures. Built with modern web standards including semantic HTML5, JSON-LD structured data, and various crawling deterrents.
- Semantic HTML5 markup
- JSON-LD structured data
- Cloudflare managed robots.txt with restrictive directives
- sitemap.xml
- Anti-crawling controls and deterrents
- Rate limiting and access controls
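The restrictive robots.txt is the first control a compliant crawler should hit. Below is a minimal sketch of a prefix-based robots.txt allow check; the sample policy string is hypothetical (the actual file crawlstop.com serves is not reproduced here), and `isAllowed` is an illustrative helper, not part of this project.

```javascript
// Hypothetical fully restrictive policy, similar in spirit to what a
// crawler-blocking test site might serve. Not the real crawlstop.com file.
const sampleRobotsTxt = `
User-agent: *
Disallow: /
`;

// Returns true if `path` is allowed for `userAgent` under `robotsTxt`.
// Simplified: longest matching Allow/Disallow prefix wins; no wildcards.
function isAllowed(robotsTxt, userAgent, path) {
  const rules = [];
  let applies = false;
  for (const raw of robotsTxt.split("\n")) {
    const line = raw.trim();
    const [rawKey, ...rest] = line.split(":");
    if (!rest.length) continue;
    const key = rawKey.trim().toLowerCase();
    const value = rest.join(":").trim();
    if (key === "user-agent") {
      // A "*" group, or one naming our agent, applies to us.
      applies = value === "*" ||
        userAgent.toLowerCase().includes(value.toLowerCase());
    } else if (applies && (key === "allow" || key === "disallow")) {
      rules.push({ allow: key === "allow", prefix: value });
    }
  }
  // Empty Disallow means "allow everything", so skip empty prefixes.
  let best = { allow: true, prefix: "" };
  for (const r of rules) {
    if (r.prefix && path.startsWith(r.prefix) &&
        r.prefix.length >= best.prefix.length) {
      best = r;
    }
  }
  return best.allow;
}

console.log(isAllowed(sampleRobotsTxt, "MyCrawler/1.0", "/some/page")); // false: "Disallow: /" blocks all paths
```

A production crawler should use a full Robots Exclusion Protocol (RFC 9309) parser rather than this simplification, but the prefix-matching idea is the core of the check.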
Visit crawlstop.com to test your web crawler's compliance with crawling restrictions. The page is intentionally designed to discourage and block automated access, providing a reliable baseline for testing how a crawler behaves when it encounters anti-scraping measures and access controls.
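Beyond robots.txt, a compliant crawler should also react correctly to runtime signals such as rate limiting (HTTP 429) and outright blocks (HTTP 403). A sketch of that behavior follows; `politeFetch` is a hypothetical helper, and the fetch and sleep functions are injectable so the logic can be exercised without hitting a live site.

```javascript
// Sketch of a polite fetch loop: honors Retry-After on 429, backs off
// exponentially otherwise, and gives up (rather than evading) on 403.
async function politeFetch(url, {
  fetchFn = fetch,                                        // Node 18+ global fetch assumed
  sleepFn = (ms) => new Promise((r) => setTimeout(r, ms)),
  maxRetries = 3,
} = {}) {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    const res = await fetchFn(url);
    if (res.status === 429) {
      // Honor Retry-After (seconds) if present; otherwise exponential backoff.
      const waitSec = Number(res.headers.get("retry-after")) || 2 ** attempt;
      await sleepFn(waitSec * 1000);
      continue;
    }
    if (res.status === 403) {
      // Access denied: a compliant crawler stops instead of retrying harder.
      return { blocked: true, status: res.status };
    }
    return { blocked: false, status: res.status };
  }
  return { blocked: true, status: 429 };
}
```

Injecting `fetchFn` keeps the retry logic testable with a stubbed response sequence, which is also a convenient way to simulate the deterrents this site serves.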
This site is built with React and Cloudflare Workers. To preview locally:

```sh
npm install
npm run build
npm run preview
```

MIT License - see LICENSE file for details.
This project is open source and welcomes community contributions: feel free to submit issues, feature requests, or pull requests.