[Bug]: Health checks do not support models with thinking capabilities, such as deepseek-r1 #707

@peterpark-wang

Description

Describe the bug

I deployed the deepseek-r1 model locally with ollama, but it fails the health check with "empty-content". However, when I run curl http://localhost:11434/api/generate -d '{"model": "deepseek-r1:latest", "prompt": "Hi"}' directly in the terminal, there is output. The issue is that r1 first emits its thought process while the final response field stays empty, so the health check treats the reply as empty content and fails. Other locally deployed models without a thought process, such as llama3.2, pass the health check.
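A health check that tolerates reasoning models could count thinking output as proof of life instead of reporting "empty-content". The sketch below is only an illustration, not the project's actual check: it assumes ollama's `response` field, the `<think>…</think>` convention deepseek-r1 uses inside it, and the separate `thinking` field newer ollama versions can emit; the function name is hypothetical.

```python
import json
import re

# deepseek-r1 wraps its chain of thought in <think>...</think> tags.
THINK_TAGS = re.compile(r"<think>.*?</think>", re.DOTALL)

def has_usable_output(resp: dict) -> bool:
    """Return True if the model produced any text at all, counting
    reasoning content as a sign of life rather than empty content."""
    raw = resp.get("response", "")
    # Text left over after stripping the reasoning block is a real answer.
    if THINK_TAGS.sub("", raw).strip():
        return True
    # No final answer, but reasoning output still shows the model works.
    if resp.get("thinking", "").strip() or THINK_TAGS.search(raw):
        return True
    return False

# A thinking-only reply, like deepseek-r1 emits before its final answer:
chunk = json.loads(
    '{"model": "deepseek-r1:latest",'
    ' "response": "<think>The user said Hi...</think>", "done": false}'
)
print(has_usable_output(chunk))   # True: reasoning counts as output

# A truly empty response should still fail the check:
print(has_usable_output({"model": "llama3.2", "response": ""}))  # False
```

With a check like this, llama3.2-style replies still pass on their plain `response` text, while r1-style replies pass on their reasoning content alone.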

[Screenshot: health check result reporting "empty-content"]

curl output:

[Screenshot: curl response from ollama]

To Reproduce

Deploy the deepseek-r1:latest model with ollama and run the health check; the failure reproduces.

Expected behavior

The health check should support models that emit a thought process (reasoning/thinking content) before their final response.

Environment

  • OS: Linux
  • Browser / Version: Chrome
  • Device: Ubuntu

Screenshots

No response

Metadata

Assignees

No one assigned

Labels

bug (Something isn't working)

Projects

No projects

Milestone

No milestone

Relationships

None yet

Development

No branches or pull requests
