    **kwargs: Any
):
```

<a id="camel.models.volcano_model.VolcanoModel._inject_reasoning_content"></a>

### _inject_reasoning_content

```python
def _inject_reasoning_content(self, messages: List[OpenAIMessage]):
```

Inject the last reasoning_content into assistant messages.

For Volcano Engine's doubao-seed models with deep thinking enabled,
the reasoning_content from the model response needs to be passed back
in subsequent requests for proper context management.

**Parameters:**

- **messages**: The original messages list.

**Returns:**

Messages with reasoning_content added to the last assistant
message that has tool_calls.

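To make the behavior concrete, here is a minimal standalone sketch of the injection step, written against plain OpenAI-style message dicts rather than the library's internals. The helper name, the `last_reasoning_content` argument, and the example tool call are illustrative assumptions, not the actual implementation.

```python
from typing import Any, Dict, List, Optional


def inject_reasoning_content(
    messages: List[Dict[str, Any]],
    last_reasoning_content: Optional[str],
) -> List[Dict[str, Any]]:
    """Copy cached reasoning onto the last assistant message that has tool_calls."""
    if not last_reasoning_content:
        return messages

    patched = [dict(m) for m in messages]  # shallow copies; leave the input untouched
    # Walk backwards to find the most recent assistant message carrying tool_calls.
    for msg in reversed(patched):
        if msg.get("role") == "assistant" and msg.get("tool_calls"):
            msg["reasoning_content"] = last_reasoning_content
            break
    return patched


history = [
    {"role": "user", "content": "What's the weather in Beijing?"},
    {
        "role": "assistant",
        "content": None,
        "tool_calls": [{
            "id": "call_1",
            "type": "function",
            "function": {"name": "get_weather", "arguments": '{"city": "Beijing"}'},
        }],
    },
    {"role": "tool", "tool_call_id": "call_1", "content": "Sunny, 25 degrees C"},
]
patched = inject_reasoning_content(history, "I should call the weather tool first.")
print(patched[1]["reasoning_content"])  # -> "I should call the weather tool first."
```
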
<a id="camel.models.volcano_model.VolcanoModel._extract_reasoning_content"></a>

### _extract_reasoning_content

```python
def _extract_reasoning_content(self, response: ChatCompletion):
```

Extract reasoning_content from the model response.

**Parameters:**

- **response**: The model response.

**Returns:**

The reasoning_content if available, None otherwise.

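A sketch of the extraction side, assuming an OpenAI-SDK-style response object on which Volcano Engine surfaces its deep-thinking output as a provider-specific `reasoning_content` field of the message; the function name and the duck-typed access are illustrative, not the library's code.

```python
from typing import Any, Optional


def extract_reasoning_content(response: Any) -> Optional[str]:
    """Pull reasoning_content out of the first choice, if the model produced any."""
    choices = getattr(response, "choices", None) or []
    if not choices:
        return None
    message = getattr(choices[0], "message", None)
    # Provider-specific field; returns None when deep thinking was not used.
    return getattr(message, "reasoning_content", None) if message else None
```
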
<a id="camel.models.volcano_model.VolcanoModel.run"></a>

### run

```python
def run(
    self,
    messages: List[OpenAIMessage],
    response_format: Optional[Type[BaseModel]] = None,
    tools: Optional[List[Dict[str, Any]]] = None
):
```

Runs inference of Volcano Engine chat completion.

Overrides the base run method to inject reasoning_content from
previous responses into subsequent requests, as required by
Volcano Engine's doubao-seed models with deep thinking enabled.

**Parameters:**

- **messages**: Message list with the chat history in OpenAI API format.
- **response_format**: The format of the response.
- **tools**: The schema of the tools to use for the request.

**Returns:**

ChatCompletion in the non-stream mode, or
Stream[ChatCompletionChunk] in the stream mode.
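
Putting the two pieces together, the sketch below shows one way such an override can be wired around a delegated `run()` call, reusing the two helper sketches above. The wrapper class, its attributes, and the caching strategy are assumptions for illustration only, not the library's actual control flow.

```python
from typing import Any, Dict, List, Optional


class ReasoningAwareModel:
    """Illustrative wrapper: caches reasoning between turns and re-injects it."""

    def __init__(self, base_model: Any) -> None:
        self._base = base_model
        self._last_reasoning: Optional[str] = None  # reasoning from the last turn

    def run(
        self,
        messages: List[Dict[str, Any]],
        response_format: Any = None,
        tools: Optional[List[Dict[str, Any]]] = None,
    ) -> Any:
        # 1. Feed the previously captured reasoning back into the history.
        patched = inject_reasoning_content(messages, self._last_reasoning)
        # 2. Delegate the actual request to the wrapped model.
        response = self._base.run(patched, response_format, tools)
        # 3. Remember this turn's reasoning for the next request
        #    (only meaningful for non-streaming responses).
        extracted = extract_reasoning_content(response)
        if extracted is not None:
            self._last_reasoning = extracted
        return response
```

Keeping only the most recent reasoning mirrors the docstring above, which injects the last reasoning_content rather than the full thinking history.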