Describe the bug
I deployed the local model deepseek-r1 with Ollama, but it fails the health check with "empty-content". However, when I run `curl http://localhost:11434/api/generate -d '{"model": "deepseek-r1:latest", "prompt": "Hi"}'` directly in the terminal, there is output. The issue is that r1 first returns its thought process while the final response is empty, which causes the health check to fail. Other locally deployed models that do not emit a thought process, such as llama3.2, pass the health check.
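For illustration, a minimal sketch of a possible workaround, assuming the health check only inspects the returned text and that deepseek-r1 wraps its reasoning in `<think>...</think>` tags (the exact check logic and field names here are assumptions, not the project's actual code):

```python
import re

def extract_answer(raw: str) -> str:
    # Strip the <think>...</think> reasoning block (assumed deepseek-r1 format)
    # so that only the final answer is considered by the health check.
    return re.sub(r"<think>.*?</think>", "", raw, flags=re.DOTALL).strip()

# Hypothetical deepseek-r1 output: reasoning first, then the actual answer.
raw = "<think>The user says Hi; reply politely.</think>Hello! How can I help?"
print(extract_answer(raw))  # → Hello! How can I help?
```

With a check like this, a reply whose answer portion is non-empty would pass even when the reasoning block comes first.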
To Reproduce
Deploy the deepseek-r1:latest model with Ollama and run the health check; the failure reproduces.
Expected behavior
The health check should support models that emit a thought process before the final answer.
Environment
- OS: [Linux]
- Browser / Version: [Chrome]
- Device: [Ubuntu]
Screenshots
No response