Merged
2 changes: 1 addition & 1 deletion ollama/docker-compose.yml
@@ -1,3 +1,3 @@
version: '3.7'

services:
@@ -8,7 +8,7 @@
PROXY_AUTH_ADD: "false"

ollama:
- image: ollama/ollama:0.17.4@sha256:b165fa2700dc374f8d4f9e8314d81c7be75487c76eee2b46ef4b511a496b736c
+ image: ollama/ollama:0.17.5@sha256:719122581b6932e1240ae70d788859089cb80d17e23cd4f98ba960b0290f70cb
environment:
OLLAMA_ORIGINS: "*"
OLLAMA_CONTEXT_LENGTH: 8192
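The compose change above pins the image by both version tag and sha256 digest, so the pull is reproducible even if the tag is later re-pushed, and the tag is kept in lockstep with the app manifest's `version` field. A minimal Python sketch of the kind of consistency check one might script in CI (values are inlined from this diff; the regex and function name are illustrative, not part of the PR):

```python
import re

# Pinned image reference from ollama/docker-compose.yml (this PR).
IMAGE = "ollama/ollama:0.17.5@sha256:719122581b6932e1240ae70d788859089cb80d17e23cd4f98ba960b0290f70cb"
# App version from ollama/umbrel-app.yml (this PR).
MANIFEST_VERSION = "0.17.5"

def split_image_ref(ref: str) -> tuple[str, str, str]:
    """Split 'repo:tag@sha256:digest' into (repo, tag, digest)."""
    m = re.fullmatch(r"([^:@]+):([^@]+)@sha256:([0-9a-f]{64})", ref)
    if m is None:
        raise ValueError(f"unpinned or malformed image reference: {ref}")
    return m.group(1), m.group(2), m.group(3)

repo, tag, digest = split_image_ref(IMAGE)
print(repo)                     # ollama/ollama
print(tag == MANIFEST_VERSION)  # True
```

This only checks that the two files agree on the version string; it does not verify that the digest actually corresponds to that tag on the registry.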
15 changes: 7 additions & 8 deletions ollama/umbrel-app.yml
@@ -3,7 +3,7 @@ id: ollama
name: Ollama
tagline: Self-host open source AI models like DeepSeek-R1, Llama, and more
category: ai
- version: "0.17.4"
+ version: "0.17.5"
port: 11434
description: >-
Ollama allows you to download and run advanced AI models directly on your own hardware. Self-hosting AI models ensures full control over your data and protects your privacy.
@@ -38,16 +38,15 @@ defaultUsername: ""
defaultPassword: ""
dependencies: []
releaseNotes: >-
- This release adds new models and improvements to tool call handling.
+ This release fixes crashes and memory issues in Qwen 3.5 models.


Key highlights in this release:
- - New Qwen 3.5 multimodal model family is now available
- - New LFM2 hybrid model family optimized for on-device deployment is now available
- - Tool call indices are now included in parallel tool calls
- - Fixed tool calls in Qwen 3 and Qwen 3.5 not being parsed correctly during thinking
- - Added Nemotron architecture support
- - Web search capabilities added for models that support tools
+ - Fixed crash in Qwen 3.5 models when split over GPU and CPU
+ - Fixed Qwen 3.5 models repeating themselves due to missing presence penalty (you may need to re-download affected models, e.g. `ollama pull qwen3.5:35b`)
+ - Fixed memory issues and crashes in the MLX runner
+ - Fixed inability to run models imported from Qwen 3.5 GGUF files
+ - `ollama run --verbose` now shows peak memory usage when using the MLX engine


Full release notes are available at https://github.com/ollama/ollama/releases