I can't get any thinking output from DeepSeek-R1, and I'm wondering whether I'm doing something wrong on the client side.
Here's how I'm setting it up:
llm, err = ollama.New(
	ollama.WithServerURL(config.Maid.Ollama.URL),
	ollama.WithModel(config.Maid.Ollama.Model), // "deepseek-r1"
)

complete, err := llm.GenerateContent(llmContextMan.Ctx, msgContent,
	llms.WithTemperature(temperature), // 0.7
	llms.WithReturnThinking(config.Maid.Ollama.ThinkingInResponse), // false (true also yields nothing)
	llms.WithInterleaveThinking(config.Maid.Ollama.InterleaveThinking), // false
	llms.WithThinkingMode(llms.ThinkingMode(config.Maid.Ollama.ThinkingMode)), // "medium"
	llms.WithStreamThinking(config.Maid.Ollama.StreamThinking), // true
	llms.WithStreamingReasoningFunc(func(ctx context.Context, reasoningChunk, chunk []byte) error {
		log.Debug(fmt.Sprintf("[AI-ReasonStream]: %s", string(reasoningChunk)))
		return nil
	}),
)

Using the prompt "what is 2+2?", I've logged both complete.Choices (tag [AI-Stuff]) and complete.Choices[0].GenerationInfo (tag [AI-Stuff2]):

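For reference, this is roughly the logging code behind those tags (a sketch, continuing the snippet above; complete is the response from GenerateContent, and GenerationInfo is langchaingo's map[string]any):

// Dump the choices as a whole, then each GenerationInfo key/value pair.
log.Debug(fmt.Sprintf("[AI-Stuff]: %+v", complete.Choices))
if len(complete.Choices) > 0 {
	for k, v := range complete.Choices[0].GenerationInfo {
		log.Debug(fmt.Sprintf("[AI-Stuff2]: %s=%v", k, v))
	}
}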
ThinkingTokens is 0 and ThinkingContent is empty, so honestly I'm at a bit of a loss as to what's going on.
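In case it helps anyone reproduce this, here's a minimal sketch that bypasses the Go client and asks Ollama's /api/chat endpoint for thinking directly (assuming Ollama >= 0.9, which is my reading of when the "think" request field and the "thinking" response field were added):

// Sanity check against Ollama itself, bypassing the client library.
// Assumes a local Ollama server with deepseek-r1 pulled.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	body, _ := json.Marshal(map[string]any{
		"model": "deepseek-r1",
		"messages": []map[string]string{
			{"role": "user", "content": "what is 2+2?"},
		},
		"think":  true,  // ask the model to emit its reasoning
		"stream": false, // single JSON response instead of chunks
	})
	resp, err := http.Post("http://localhost:11434/api/chat", "application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var out struct {
		Message struct {
			Content  string `json:"content"`
			Thinking string `json:"thinking"` // populated when the model thinks
		} `json:"message"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		panic(err)
	}
	fmt.Printf("thinking: %q\ncontent: %q\n", out.Message.Thinking, out.Message.Content)
}

If this returns a non-empty thinking value, the model and server are fine, and the gap would be in how the client surfaces it as ThinkingContent.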