
DeepSeek Official V4 API Update DeepSeek 官方 V4 接口调整 #209

@cscomic

Description


中文描述

问题概述

2026 年 4 月 24 日,DeepSeek 官网上线了 deepseek-v4 接口文档并同步更新了 API 服务,引入两个新模型 deepseek-v4-pro 和 deepseek-v4-flash。API 服务已同步上线,开发者将 model 参数修改为 deepseek-v4-pro 或 deepseek-v4-flash 即可调用。同时,旧模型名 deepseek-chat 和 deepseek-reasoner 将于 2026 年 7 月 24 日正式弃用。

Lumina-Note 目前尚未跟进此接口变更,导致用户无法正常使用 DeepSeek 最新 V4 模型。

核心改变内容

根据 DeepSeek 官方 API 文档,以下变更需要 Lumina-Note 适配:

1. 新增模型名称

旧模型名            新模型名                        状态
deepseek-chat       deepseek-v4-flash(非思考模式)  旧模型将于 2026/07/24 弃用
deepseek-reasoner   deepseek-v4-flash(思考模式)    旧模型将于 2026/07/24 弃用
—                   deepseek-v4-pro                 全新旗舰模型

注意:deepseek-chat 和 deepseek-reasoner 两个旧名称均对应到 deepseek-v4-flash,区别在于是否启用思考模式。

2. 思考模式与推理强度控制参数

DeepSeek V4 引入了新的 API 参数来控制思考模式和推理强度:

  • thinking:JSON 对象,{"type": "enabled"} 启用思考模式
  • reasoning_effort:字符串,可选值 "high"(仅 deepseek-v4-pro 支持)

3. 上下文长度提升

上下文长度从 128K 提升至 1M tokens

4. Base URL 调整

OpenAI 兼容格式的 Base URL 仍为 https://api.deepseek.com,原来的 https://api.deepseek.com/v1 已弃用;
Anthropic 兼容格式为 https://api.deepseek.com/anthropic。

适配建议

Lumina-Note 需要进行以下调整:

  1. 新增模型选项:在模型选择器中增加 deepseek-v4-pro 和 deepseek-v4-flash 两个选项
  2. 请求参数调整:在 API 请求体中支持 thinking 和 reasoning_effort 参数
  3. 思考模式开关:为 deepseek-v4-flash 和 deepseek-v4-pro 提供思考模式切换选项
  4. 旧模型过渡方案:建议同时保留旧模型名作为选项,直至 2026 年 7 月 24 日弃用日期
  5. 上下文限制更新:将输入限制更新为 1M tokens,输出限制更新为 384K

环境信息

  • DeepSeek V4 上线日期:2026 年 4 月 24 日
  • 旧模型弃用日期:2026 年 7 月 24 日
  • 参考文档地址:
    DeepSeek API 官方文档

English Description

Summary

On April 24, 2026, DeepSeek officially launched the deepseek-v4 API documentation and updated the API service, introducing two new models: deepseek-v4-pro and deepseek-v4-flash. The API service is now live — developers can switch by changing the model parameter to deepseek-v4-pro or deepseek-v4-flash. The legacy model names deepseek-chat and deepseek-reasoner will be deprecated on July 24, 2026.

Lumina-Note has not yet adapted to this API change, preventing users from using the latest DeepSeek V4 models.

Key Changes

Based on the official DeepSeek API documentation, the following changes require adaptation by Lumina-Note:

1. New Model Names

Old Model Name      New Model Name                          Status
deepseek-chat       deepseek-v4-flash (non-thinking mode)   To be deprecated on 2026/07/24
deepseek-reasoner   deepseek-v4-flash (thinking mode)       To be deprecated on 2026/07/24
—                   deepseek-v4-pro                         New flagship model

Note: Both legacy names (deepseek-chat and deepseek-reasoner) now map to deepseek-v4-flash, distinguished by whether thinking mode is enabled.

2. Thinking Mode & Reasoning Effort Parameters

DeepSeek V4 introduces new API parameters for controlling thinking mode and reasoning effort:

  • thinking: JSON object, {"type": "enabled"} to enable thinking mode
  • reasoning_effort: String, "high" option supported (only for deepseek-v4-pro)

3. Context Length Upgrade

Context length increased from 128K to 1M tokens.

4. Base URL Changed

The OpenAI-compatible Base URL remains https://api.deepseek.com; the former https://api.deepseek.com/v1 path is deprecated.
The Anthropic-compatible endpoint is https://api.deepseek.com/anthropic.
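For clients that assemble endpoint URLs themselves, the change amounts to dropping the /v1 segment. A minimal sketch follows; the /chat/completions path is an assumption based on the OpenAI-compatible convention, not quoted from the DeepSeek docs.

```python
# Base URLs as stated in this issue.
OPENAI_COMPAT_BASE = "https://api.deepseek.com"            # no /v1 suffix
ANTHROPIC_COMPAT_BASE = "https://api.deepseek.com/anthropic"

def chat_url(base: str = OPENAI_COMPAT_BASE) -> str:
    """Build the chat-completions endpoint URL (path is an assumption)."""
    return f"{base}/chat/completions"
```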

Adaptation Suggestions

Lumina-Note needs the following adjustments:

  1. Add model options: Include deepseek-v4-pro and deepseek-v4-flash in the model selector
  2. Update request parameters: Support thinking and reasoning_effort fields in API requests
  3. Thinking mode toggle: Provide a thinking-mode toggle for deepseek-v4-flash and deepseek-v4-pro
  4. Legacy model transition: Keep legacy model names as options until the July 24, 2026 deprecation deadline
  5. Update context limits: Raise the input limit to 1M tokens and output limit to 384K
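Suggestion 4 (the legacy transition) could be handled with a small name-mapping shim. This is one possible sketch: the mapping follows the table in this issue, while the helper name `resolve_model` is hypothetical.

```python
# Legacy name -> (V4 model, thinking mode enabled), per the table in this issue.
LEGACY_MAP = {
    "deepseek-chat":     ("deepseek-v4-flash", False),  # non-thinking mode
    "deepseek-reasoner": ("deepseek-v4-flash", True),   # thinking mode
}

def resolve_model(name: str) -> tuple[str, bool]:
    """Return (model, thinking_enabled) for both legacy and V4 model names."""
    if name in LEGACY_MAP:
        return LEGACY_MAP[name]
    # V4 names pass through; thinking is off unless toggled by the user.
    return (name, False)
```

Keeping the shim behind the existing model selector would let legacy entries keep working unchanged until the July 24, 2026 cutoff, after which the map can simply be deleted.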

Environment

  • DeepSeek V4 Launch Date: April 24, 2026
  • Legacy Model Deprecation Date: July 24, 2026
  • Reference documentation:
    DeepSeek API Docs
