English Description
Summary
On April 24, 2026, DeepSeek officially launched the deepseek-v4 API documentation and updated the API service, introducing two new models: deepseek-v4-pro and deepseek-v4-flash. The API service is now live — developers can switch by changing the model parameter to deepseek-v4-pro or deepseek-v4-flash. The legacy model names deepseek-chat and deepseek-reasoner will be deprecated on July 24, 2026.
Lumina-Note has not yet adapted to this API change, preventing users from using the latest DeepSeek V4 models.
Key Changes
Based on the official DeepSeek API documentation, the following changes require adaptation by Lumina-Note:
1. New Model Names
| Old Model Name | New Model Name | Status |
| --- | --- | --- |
| deepseek-chat | deepseek-v4-flash (non-thinking mode) | To be deprecated on 2026/07/24 |
| deepseek-reasoner | deepseek-v4-flash (thinking mode) | To be deprecated on 2026/07/24 |
| — | deepseek-v4-pro | New flagship model |
Note: Both legacy names (deepseek-chat and deepseek-reasoner) now map to deepseek-v4-flash, distinguished by whether thinking mode is enabled.
2. Thinking Mode & Reasoning Effort Parameters
DeepSeek V4 introduces new API parameters for controlling thinking mode and reasoning effort:

- `thinking`: JSON object; `{"type": "enabled"}` enables thinking mode
- `reasoning_effort`: string; the value `"high"` is supported only by deepseek-v4-pro
3. Context Length Upgrade
Context length increased from 128K to 1M tokens.
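A minimal sketch of the corresponding client-side limit change (the 1M figure is taken from this issue; the constant and helper are illustrative, not Lumina-Note code):

```python
# Sketch: update the context-window check for DeepSeek V4.
# 1M-token context per this issue (previously 128K); helper is hypothetical.

V4_CONTEXT_TOKENS = 1_000_000   # was 128_000 for the V3-era models

def fits_context(prompt_tokens: int, limit: int = V4_CONTEXT_TOKENS) -> bool:
    """Return True if the prompt fits within the model's context window."""
    return prompt_tokens <= limit
```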
4. Base URL Changes
The OpenAI-compatible base URL remains https://api.deepseek.com; the former https://api.deepseek.com/v1 is deprecated.
The Anthropic-compatible endpoint is https://api.deepseek.com/anthropic.
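Since the /v1 suffix is deprecated, stored configurations may need normalizing on upgrade. A sketch (the function name is an assumption for illustration):

```python
# Sketch: normalize a stored DeepSeek base URL now that the /v1
# suffix is deprecated. normalize_base_url is a hypothetical helper.

DEEPSEEK_BASE_URL = "https://api.deepseek.com"
DEEPSEEK_ANTHROPIC_URL = "https://api.deepseek.com/anthropic"

def normalize_base_url(url: str) -> str:
    """Strip a trailing slash and the deprecated /v1 suffix, if present."""
    url = url.rstrip("/")
    if url.endswith("/v1"):
        url = url[: -len("/v1")]
    return url
```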
Adaptation Suggestions
Lumina-Note needs the following adjustments:
- Add model options: include deepseek-v4-pro and deepseek-v4-flash in the model selector
- Update request parameters: support the `thinking` and `reasoning_effort` fields in API requests
- Thinking mode toggle: provide a thinking-mode toggle for deepseek-v4-flash and deepseek-v4-pro
- Legacy model transition: keep the legacy model names as options until the July 24, 2026 deprecation date
- Update context limits: raise the input limit to 1M tokens and the output limit to 384K
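The transition could be handled with a simple mapping from legacy names to the new model plus its thinking flag. The mapping follows the table above; the data structure and helper are an illustrative suggestion, not existing Lumina-Note code:

```python
# Sketch: map legacy model names to their V4 equivalents during the
# transition window (until 2026-07-24). Mapping per this issue's table.

LEGACY_MODEL_MAP = {
    "deepseek-chat":     ("deepseek-v4-flash", False),  # non-thinking mode
    "deepseek-reasoner": ("deepseek-v4-flash", True),   # thinking mode
}

def resolve_model(name: str) -> tuple[str, bool]:
    """Return (model_name, thinking_enabled); new names pass through."""
    return LEGACY_MODEL_MAP.get(name, (name, False))
```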
Environment
- DeepSeek V4 Launch Date: April 24, 2026
- Legacy Model Deprecation Date: July 24, 2026
- Reference documentation: DeepSeek API Docs