This repository was archived by the owner on Mar 4, 2025. It is now read-only.

Commit 9c8dd2f

Merge pull request #18 from 006lp/master
feat: upgrade Llama to 3.3 and add proxy support for ddg-chat
2 parents: 1be1235 + 69939b4

3 files changed: +22 −11 lines

api/index.js (6 additions, 6 deletions)

```diff
@@ -75,7 +75,7 @@ router.get(config.API_PREFIX + '/v1/models', () =>
     data: [
       { id: 'gpt-4o-mini', object: 'model', owned_by: 'ddg' },
       { id: 'claude-3-haiku', object: 'model', owned_by: 'ddg' },
-      { id: 'llama-3.1-70b', object: 'model', owned_by: 'ddg' },
+      { id: 'llama-3.3-70b', object: 'model', owned_by: 'ddg' },
       { id: 'mixtral-8x7b', object: 'model', owned_by: 'ddg' },
       { id: 'o3-mini', object: 'model', owned_by: 'ddg' },
     ],
@@ -215,9 +215,9 @@ function messagesPrepare(messages) {
     if (['user', 'assistant'].includes(role)) {
       const contentStr = Array.isArray(message.content)
         ? message.content
-            .filter((item) => item.text)
-            .map((item) => item.text)
-            .join('') || ''
+          .filter((item) => item.text)
+          .map((item) => item.text)
+          .join('') || ''
         : message.content
       content += `${role}:${contentStr};\r\n`
     }
@@ -247,8 +247,8 @@ function convertModel(inputModel) {
     case 'claude-3-haiku':
      model = 'claude-3-haiku-20240307'
      break
-    case 'llama-3.1-70b':
-      model = 'meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo'
+    case 'llama-3.3-70b':
+      model = 'meta-llama/Llama-3.3-70B-Instruct-Turbo'
      break
     case 'mixtral-8x7b':
      model = 'mistralai/Mixtral-8x7B-Instruct-v0.1'
```
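For context, the two functions touched in api/index.js can be sketched roughly as follows. This is a paraphrase of the diff, not the full source: the default model and the return statements are assumptions, and the surrounding router and upstream call are omitted.

```javascript
// convertModel maps a public model id to the backend model name DDG expects.
// Default value is an assumption; the diff only shows three switch cases.
function convertModel(inputModel) {
  let model = 'gpt-4o-mini'
  switch (inputModel) {
    case 'claude-3-haiku':
      model = 'claude-3-haiku-20240307'
      break
    case 'llama-3.3-70b':
      model = 'meta-llama/Llama-3.3-70B-Instruct-Turbo'
      break
    case 'mixtral-8x7b':
      model = 'mistralai/Mixtral-8x7B-Instruct-v0.1'
      break
  }
  return model
}

// messagesPrepare flattens an OpenAI-style message list into a single
// prompt string, keeping only user/assistant turns.
function messagesPrepare(messages) {
  let content = ''
  for (const message of messages) {
    const role = message.role
    if (['user', 'assistant'].includes(role)) {
      // content may be a plain string, or an array of { type, text } parts
      const contentStr = Array.isArray(message.content)
        ? message.content
            .filter((item) => item.text)
            .map((item) => item.text)
            .join('') || ''
        : message.content
      content += `${role}:${contentStr};\r\n`
    }
  }
  return content
}
```

For example, `convertModel('llama-3.3-70b')` returns `'meta-llama/Llama-3.3-70B-Instruct-Turbo'`, and a user message whose content is an array of text parts is joined into one `user:...;` line.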

docker-compose.yml (10 additions, 0 deletions)

```diff
@@ -7,3 +7,13 @@ services:
     restart: unless-stopped
     ports:
       - "8787:8787"
+    #environment:
+    #  - HTTP_PROXY=http://<your_proxy_address>:<proxy_port>  # HTTP proxy
+    #  - HTTPS_PROXY=http://<your_proxy_address>:<proxy_port>  # HTTPS proxy
+    #  - NO_PROXY="localhost,127.0.0.1"  # skip the proxy for local requests
+    #networks:
+    #  - bridge-network
+
+#networks:
+#  bridge-network:
+#    driver: bridge
```
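When the proxy is needed, the commented block can be enabled roughly as below. The service name `ddg-chat` and the proxy address are illustrative assumptions (the diff does not show the service name; substitute your own proxy host and port):

```yaml
services:
  ddg-chat:                     # service name assumed; not shown in the diff
    restart: unless-stopped
    ports:
      - "8787:8787"
    environment:
      - HTTP_PROXY=http://proxy.example.com:7890    # illustrative address
      - HTTPS_PROXY=http://proxy.example.com:7890
      - NO_PROXY=localhost,127.0.0.1                # bypass proxy locally
    networks:
      - bridge-network

networks:
  bridge-network:
    driver: bridge
```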

readme.md (6 additions, 5 deletions)

```diff
@@ -5,7 +5,7 @@
 
 Supports Vercel, Cloudflare Workers, Docker, Render, etc.
 
-Supports the GPT4o mini, Claude 3 Haiku, Llama 3.1 70B, and Mixtral 8x7B models
+Supports the o3 mini, GPT 4o mini, Claude 3 Haiku, Llama 3.3 70B, and Mixtral 8x7B models
 
 All models are provided anonymously by DuckDuckGo
 
@@ -72,15 +72,16 @@ curl --request POST 'https://chatcfapi.r12.top/v1/chat/completions' \
 
 - gpt-4o-mini
 - claude-3-haiku
-- llama-3.1-70b
+- llama-3.3-70b
 - mixtral-8x7b
 - o3-mini
 
 ## Manual Deployment
 
-Because the DDG API limits per-IP concurrency, deploying on Vercel is recommended; if you deploy locally with Docker or similar, make sure the project runs behind a proxy pool.
+To avoid triggering the DDG API's concurrency limits, make sure the project runs behind a proxy pool when deploying locally with Docker or similar.
+Also, because Vercel's and Cloudflare's IPs have been blocked by DDG (possibly due to heavy usage or temporary risk controls), deploying through them is no longer recommended.
 
-### Vercel
+### Vercel (not recommended)
 
 Method 1: Fork the repository and deploy in the cloud
 
@@ -110,7 +111,7 @@ npm run publish
 
 [<img src="https://render.com/images/deploy-to-render-button.svg" alt="Deploy on Render" height="30">](https://render.com/deploy)
 
-### Cloudflare Workers
+### Cloudflare Workers (not recommended)
 
 Method 1:
```