English | 简体中文
x-crawl is a flexible Node.js crawler library.
If it helps you, please give the x-crawl repository a Star to support it.
- Crawl pages, JSON, file resources, etc. with simple configuration.
- Crawls pages with the built-in puppeteer and parses them with the jsdom library.
- Support asynchronous/synchronous way to crawl data.
- Support Promise/Callback method to get the result.
- Polling function for timed, repeated crawling.
- Human-like (randomized) request intervals.
- Written in TypeScript, providing generics.
The crawlPage API internally uses the puppeteer library to crawl pages.
The following can be done:
- Generate screenshots and PDFs of pages.
- Crawl a SPA (Single-Page Application) and generate pre-rendered content (i.e. "SSR" (Server-Side Rendering)).
- Automate form submission, UI testing, keyboard input, etc.
Take NPM as an example:
npm install x-crawl
Timed crawling: get the recommended pictures of the YouTube homepage every day, as an example:
// 1. Import modules ES/CJS
import path from 'node:path'
import xCrawl from 'x-crawl'

// 2. Create a crawler instance
const myXCrawl = xCrawl({
  timeout: 10000, // request timeout
  intervalTime: { max: 3000, min: 2000 } // control request frequency
})

// 3. Set the crawling task
// Call the startPolling API to start the polling function; the callback will be called every day
myXCrawl.startPolling({ d: 1 }, () => {
  // Call the crawlPage API to crawl the page
  myXCrawl.crawlPage('https://www.youtube.com/').then((res) => {
    const { jsdom } = res.data // By default, the jsdom library is used to parse the page

    // Get the cover image elements of the promoted videos
    const imgEls = jsdom.window.document.querySelectorAll(
      '.yt-core-image--fill-parent-width'
    )

    // Set the request configuration
    const requestConfig = []
    imgEls.forEach((item) => {
      if (item.src) {
        requestConfig.push(item.src)
      }
    })

    // Call the crawlFile API to crawl the pictures
    myXCrawl.crawlFile({
      requestConfig,
      fileConfig: { storeDir: path.resolve(__dirname, './upload') }
    })
  })
})
Running result:
Note: do not crawl arbitrarily; check the target site's robots.txt before crawling. This example only demonstrates how to use x-crawl.
Create a new application instance via xCrawl():
import xCrawl from 'x-crawl'
const myXCrawl = xCrawl({
// options
})
For the related options, refer to XCrawlBaseConfig.
A crawler instance has two crawling modes, asynchronous and synchronous, and each instance can use only one of them.
import xCrawl from 'x-crawl'
const myXCrawl = xCrawl({
mode: 'async'
})
The mode option defaults to async .
- async: asynchronous requests; in batch requests, the next request is sent without waiting for the current one to complete
- sync: synchronous requests; in batch requests, the current request must complete before the next one is sent
If an interval time is set, the interval must also elapse before the next request is sent.
import xCrawl from 'x-crawl'
const myXCrawl1 = xCrawl({
// options
})
const myXCrawl2 = xCrawl({
// options
})
Crawl a page via crawlPage()
myXCrawl.crawlPage('https://xxx.com').then(res => {
const { jsdom, page } = res.data
})
Crawl API (interface) data via crawlData()
const requestConfig = [
{ url: 'https://xxx.com/xxxx' },
{ url: 'https://xxx.com/xxxx', method: 'POST', data: { name: 'coderhxl' } },
{ url: 'https://xxx.com/xxxx' }
]
myXCrawl.crawlData({ requestConfig }).then(res => {
// handle the result
})
Crawl file data via crawlFile()
import path from 'node:path'
const requestConfig = [ 'https://xxx.com/xxxx', 'https://xxx.com/xxxx' ]
myXCrawl.crawlFile({
  requestConfig,
  fileConfig: {
    storeDir: path.resolve(__dirname, './upload') // storage folder
  }
}).then(fileInfos => {
  console.log(fileInfos)
})
Setting the request interval time can limit concurrency and avoid putting too much pressure on the server.
import xCrawl from 'x-crawl'
const myXCrawl = xCrawl({
intervalTime: { max: 3000, min: 1000 }
})
The intervalTime option defaults to undefined. If a value is set, x-crawl waits for that period before each request, which limits concurrency and avoids putting too much pressure on the server.
- number: a fixed time (in milliseconds) to wait before each request
- object: a random value between min and max is chosen before each request, which is closer to human behavior
The first request does not trigger the interval.
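A minimal sketch of the two forms (the instance names are illustrative, not part of the library):

import xCrawl from 'x-crawl'

// number form: wait a fixed 2000 ms before each request after the first
const fixedIntervalCrawl = xCrawl({ intervalTime: 2000 })

// object form: wait a random 1000-3000 ms before each request, closer to human behavior
const randomIntervalCrawl = xCrawl({ intervalTime: { max: 3000, min: 1000 } })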
requestConfig is very flexible and can be written in 5 ways:
- a string
- an array of strings
- an object
- an array of objects
- an array mixing strings and objects
// requestConfig writing method 1:
const requestConfig1 = 'https://xxx.com/xxxx'
// requestConfig writing method 2:
const requestConfig2 = [ 'https://xxx.com/xxxx', 'https://xxx.com/xxxx', 'https://xxx.com/xxxx' ]
// requestConfig writing method 3:
const requestConfig3 = {
url: 'https://xxx.com/xxxx',
method: 'POST',
data: { name: 'coderhxl' }
}
// requestConfig writing method 4:
const requestConfig4 = [
{ url: 'https://xxx.com/xxxx' },
{ url: 'https://xxx.com/xxxx', method: 'POST', data: { name: 'coderhxl' } },
{ url: 'https://xxx.com/xxxx' }
]
// requestConfig writing method 5:
const requestConfig5 = [
'https://xxx.com/xxxx',
{ url: 'https://xxx.com/xxxx', method: 'POST', data: { name: 'coderhxl' } },
'https://xxx.com/xxxx'
]
Choose whichever form fits your situation.
Create a crawler instance by calling xCrawl. The request queue is maintained by each instance method call, not by the instance itself.
For more detailed types, please see the Types section
function xCrawl(baseConfig?: XCrawlBaseConfig): XCrawlInstance
const myXCrawl = xCrawl({
  baseUrl: 'https://xxx.com',
  timeout: 10000,
  // The interval between requests; only takes effect when there are multiple requests
  intervalTime: {
    max: 2000,
    min: 1000
  }
})
Passing baseConfig here provides the default values used by crawlPage/crawlData/crawlFile.
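For example, with the baseUrl above set, a relative path passed to a crawl method is resolved against it (a sketch; the path is a placeholder):

// '/xxxx' is resolved against baseUrl, i.e. 'https://xxx.com/xxxx'
myXCrawl.crawlData({ requestConfig: '/xxxx' }).then((res) => {
  console.log(res)
})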
Note: to avoid repeatedly creating instances in subsequent examples, myXCrawl here will be the crawler instance used in the crawlPage/crawlData/crawlFile examples.
The mode option defaults to async .
- async: in batch requests, the next request is sent without waiting for the current one to complete
- sync: in batch requests, the current request must complete before the next one is sent
If an interval time is set, the interval must also elapse before the next request is sent.
The intervalTime option defaults to undefined. If a value is set, x-crawl waits for that period before each request, which limits concurrency and avoids putting too much pressure on the server.
- number: a fixed time (in milliseconds) to wait before each request
- object: a random value between min and max is chosen before each request, which is closer to human behavior
The first request does not trigger the interval.
crawlPage is a method of the myXCrawl instance above, usually used to crawl a page.
- Look at the CrawlPageConfig type
- Look at the CrawlPage type
function crawlPage: (
config: CrawlPageConfig,
callback?: (res: CrawlPage) => void
) => Promise<CrawlPage>
myXCrawl.crawlPage('/xxx').then((res) => {
const { jsdom } = res.data
console.log(jsdom.window.document.querySelector('title')?.textContent)
})
The page instance can be obtained from res.data.page and used for interactive operations such as events. For specific usage, refer to page.
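For instance, the page instance can be used for puppeteer operations such as taking a screenshot (a sketch; the output path is a placeholder):

myXCrawl.crawlPage('/xxx').then(async (res) => {
  const { page } = res.data

  // Use the puppeteer Page directly, e.g. to take a screenshot of the crawled page
  await page.screenshot({ path: './xxx.png' })
})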
crawlData is a method of the myXCrawl instance above, usually used to crawl APIs and obtain JSON data.
- Look at the CrawlDataConfig type
- Look at the CrawlResCommonV1 type
- Look at the CrawlResCommonArrV1 type
function crawlData: <T = any>(
config: CrawlDataConfig,
callback?: (res: CrawlResCommonV1<T>) => void
) => Promise<CrawlResCommonArrV1<T>>
const requestConfig = [
{ url: 'https://xxx.com/xxxx' },
{ url: 'https://xxx.com/xxxx', method: 'POST', data: { name: 'coderhxl' } },
{ url: 'https://xxx.com/xxxx' }
]
myXCrawl.crawlData({ requestConfig }).then(res => {
console.log(res)
})
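Per the signature above, the optional callback receives a single CrawlResCommonV1<T>, while the Promise resolves with the full CrawlResCommonArrV1<T> array; a brief sketch:

myXCrawl.crawlData({ requestConfig }, (res) => {
  // res is a single CrawlResCommonV1 result: id, statusCode, headers, data
  console.log(res.id, res.data)
})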
crawlFile is a method of the myXCrawl instance above, usually used to crawl files such as pictures and PDF files.
- Look at the CrawlFileConfig type
- Look at the CrawlResCommonV1 type
- Look at the CrawlResCommonArrV1 type
- Look at the FileInfo type
function crawlFile: (
config: CrawlFileConfig,
callback?: (res: CrawlResCommonV1<FileInfo>) => void
) => Promise<CrawlResCommonArrV1<FileInfo>>
const requestConfig = [ 'https://xxx.com/xxxx', 'https://xxx.com/xxxx' ]
myXCrawl.crawlFile({
  requestConfig,
  fileConfig: {
    storeDir: path.resolve(__dirname, './upload') // storage folder
  }
}).then(fileInfos => {
  console.log(fileInfos)
})
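Each element of the resolved array wraps a FileInfo in its data field (see the Types section), so the stored file paths could be printed like this (a sketch):

myXCrawl.crawlFile({
  requestConfig,
  fileConfig: { storeDir: path.resolve(__dirname, './upload') }
}).then((fileInfos) => {
  fileInfos.forEach((res) => {
    // res.data is a FileInfo: fileName, mimeType, size, filePath
    console.log(res.data.filePath)
  })
})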
startPolling is a method of the myXCrawl instance, typically used for polling operations, such as fetching news at regular intervals.
- Look at the StartPollingConfig type
function startPolling(
config: StartPollingConfig,
callback: (count: number) => void
): void
myXCrawl.startPolling({ h: 1, m: 30 }, () => {
// will be executed every one and a half hours
// crawlPage/crawlData/crawlFile
})
interface AnyObject extends Object {
[key: string | number | symbol]: any
}
type Method = 'get' | 'GET' | 'delete' | 'DELETE' | 'head' | 'HEAD' | 'options' | 'OPTIONS' | 'post' | 'POST' | 'put' | 'PUT' | 'patch' | 'PATCH' | 'purge' | 'PURGE' | 'link' | 'LINK' | 'unlink' | 'UNLINK'
interface RequestConfigObject {
url: string
method?: Method
headers?: AnyObject
params?: AnyObject
data?: any
timeout?: number
proxy?: string
}
type RequestConfig = string | RequestConfigObject
interface MergeRequestConfigObject {
url: string
timeout?: number
proxy?: string
}
type IntervalTime = number | {
max: number
min?: number
}
interface XCrawlBaseConfig {
baseUrl?: string
timeout?: number
intervalTime?: IntervalTime
mode?: 'async' | 'sync'
proxy?: string
}
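A sketch of an instance that sets every XCrawlBaseConfig field (baseUrl and the proxy address are placeholders):

import xCrawl from 'x-crawl'

const myXCrawl = xCrawl({
  baseUrl: 'https://xxx.com',
  timeout: 10000,
  intervalTime: { max: 3000, min: 1000 },
  mode: 'sync',
  proxy: 'http://localhost:7890' // placeholder proxy address
})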
interface CrawlBaseConfigV1 {
requestConfig: RequestConfig | RequestConfig[]
intervalTime?: IntervalTime
}
type CrawlPageConfig = string | MergeRequestConfigObject
interface CrawlDataConfig extends CrawlBaseConfigV1 {
}
interface CrawlFileConfig extends CrawlBaseConfigV1 {
fileConfig: {
storeDir: string // Store folder
extension?: string // Filename extension
}
}
interface StartPollingConfig {
d?: number // day
h?: number // hour
m?: number // minute
}
interface CrawlResCommonV1<T> {
id: number
statusCode: number | undefined
headers: IncomingHttpHeaders // nodejs: http type
data: T
}
type CrawlResCommonArrV1<T> = CrawlResCommonV1<T>[]
interface FileInfo {
fileName: string
mimeType: string
size: number
filePath: string
}
interface CrawlPage {
httpResponse: HTTPResponse | null // The type of HTTPResponse in the puppeteer library
data: {
page: Page // The type of Page in the puppeteer library
jsdom: JSDOM // The type of JSDOM in the jsdom library
}
}
If you have any questions or needs, please submit an issue at https://github.com/coder-hxl/x-crawl/issues .