
Commit f274a1c

Merge pull request #57 from coder-hxl/v8.0.0
V8.0.0
2 parents: c798ee9 + 8950e1b

13 files changed (+360 −218 lines)

CHANGELOG.md

+23 −1
@@ -1,8 +1,30 @@
+# [v8.0.0](https://github.com/coder-hxl/x-crawl/compare/v7.1.3...v8.0.0) (2023-08-22)
+
+### 🚨 Breaking Changes
+
+- update dependencies
+
+  - puppeteer upgraded from 19.10.0 to 21.1.0.
+  - https-proxy-agent upgraded from 5.0.1 to 7.0.1.
+
+- XCrawlConfig.crawlPage's launchBrowser option renamed to puppeteerLaunch.
+
+---
+
+### 🚨 Breaking Changes (Chinese)
+
+- update dependencies
+
+  - puppeteer upgraded from 19.10.0 to 21.1.0.
+  - https-proxy-agent upgraded from 5.0.1 to 7.0.1.
+
+- The launchBrowser option of XCrawlConfig.crawlPage renamed to puppeteerLaunch.
+
 # [v7.1.3](https://github.com/coder-hxl/x-crawl/compare/v7.1.2...v7.1.3) (2023-07-02)
 
 ### 🐞 Bug fixes
 
-- The crawlData API writes the correct data to the request body and processes the response body..
+- The crawlData API writes the correct data to the request body and processes the response body.
 
 ---
 
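The rename only changes the key passed under crawlPage when the crawler is created; the puppeteer launch options themselves are untouched. A minimal before/after migration sketch based on the README example changed in this commit (the URL is just a placeholder):

```ts
import xCrawl from 'x-crawl'

// v7.x (no longer accepted in v8.0.0):
// const myXCrawl = xCrawl({
//   maxRetry: 3,
//   crawlPage: { launchBrowser: { headless: false } }
// })

// v8.0.0: same puppeteer launch options, new key name
const myXCrawl = xCrawl({
  maxRetry: 3,
  // Cancel running the browser in headless mode
  crawlPage: { puppeteerLaunch: { headless: false } }
})

myXCrawl.crawlPage('https://www.example.com').then((res) => {})
```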
README.md

+2 −2
@@ -331,7 +331,7 @@ import xCrawl from 'x-crawl'
 const myXCrawl = xCrawl({
   maxRetry: 3,
   // Cancel running the browser in headless mode
-  crawlPage: { launchBrowser: { headless: false } }
+  crawlPage: { puppeteerLaunch: { headless: false } }
 })
 
 myXCrawl.crawlPage('https://www.example.com').then((res) => {})
@@ -1298,7 +1298,7 @@ export interface XCrawlConfig extends CrawlCommonConfig {
   baseUrl?: string
   intervalTime?: IntervalTime
   crawlPage?: {
-    launchBrowser?: PuppeteerLaunchOptions // puppeteer
+    puppeteerLaunch?: PuppeteerLaunchOptions // puppeteer
   }
 }
 ```
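Since the interface types the option as PuppeteerLaunchOptions from puppeteer, the launch options can also live in a separately typed value. A small sketch, assuming puppeteer 21.x exports the PuppeteerLaunchOptions type under that name:

```ts
import xCrawl from 'x-crawl'
// Assumption: puppeteer 21.x exports the PuppeteerLaunchOptions type referenced above
import type { PuppeteerLaunchOptions } from 'puppeteer'

// Keeping the launch options as a named, typed value means a future puppeteer
// upgrade surfaces option-type errors here rather than inside the xCrawl call
const puppeteerLaunch: PuppeteerLaunchOptions = {
  headless: false // cancel headless mode, as in the README example
}

const myXCrawl = xCrawl({
  maxRetry: 3,
  crawlPage: { puppeteerLaunch }
})

myXCrawl.crawlPage('https://www.example.com').then((res) => {})
```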

docs/cn.md

+2-2
@@ -329,7 +329,7 @@ import xCrawl from 'x-crawl'
 const myXCrawl = xCrawl({
   maxRetry: 3,
   // Cancel running the browser in headless mode
-  crawlPage: { launchBrowser: { headless: false } }
+  crawlPage: { puppeteerLaunch: { headless: false } }
 })
 
 myXCrawl.crawlPage('https://www.example.com').then((res) => {})
@@ -1292,7 +1292,7 @@ export interface XCrawlConfig extends CrawlCommonConfig {
   baseUrl?: string
   intervalTime?: IntervalTime
   crawlPage?: {
-    launchBrowser?: PuppeteerLaunchOptions // puppeteer
+    puppeteerLaunch?: PuppeteerLaunchOptions // puppeteer
   }
 }
 ```

package.json

+3 −3
@@ -1,7 +1,7 @@
 {
   "private": true,
   "name": "x-crawl",
-  "version": "7.1.3",
+  "version": "8.0.0",
   "author": "coderHXL",
   "description": "x-crawl is a flexible Node.js multifunctional crawler library.",
   "license": "MIT",
@@ -32,8 +32,8 @@
   },
   "dependencies": {
     "chalk": "4.1.2",
-    "https-proxy-agent": "^7.0.0",
-    "puppeteer": "19.10.0",
+    "https-proxy-agent": "^7.0.1",
+    "puppeteer": "21.1.0",
     "x-crawl": "link:"
   },
   "devDependencies": {
