
Commit 63cd19d

Empreiteiro authored and claude committed
feat: switch default Ollama model to granite4:350m

granite4:350m is ~676 MB on disk vs ~1.5 GB for granite3.3:2b, making the bundled provider far quicker to bootstrap on a fresh machine while still supporting tool calling.

- `ollama_bootstrap.sh`: `LANGFLOW_OLLAMA_MODEL` default
- `docker_example/ollama-granite`: ollama-init pull command, README
- `ollama_constants.py`: catalog entry replaced
- `Makefile`: `ollama_up` help text

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
1 parent 41fe543 · commit 63cd19d

5 files changed

Lines changed: 10 additions & 10 deletions


Makefile

Lines changed: 1 addition & 1 deletion

```diff
@@ -233,7 +233,7 @@ lint: install_backend ## run linters
 
 
 
-ollama_up: ## start a local Ollama with granite3.3:2b for the bundled model provider
+ollama_up: ## start a local Ollama with granite4:350m for the bundled model provider
 	@bash scripts/setup/ollama_bootstrap.sh
 
 ollama_down: ## stop and remove the bundled Ollama container (keeps the model volume)
```

docker_example/ollama-granite/README.md

Lines changed: 4 additions & 4 deletions

````diff
@@ -1,6 +1,6 @@
-# Langflow + Ollama (granite3.3:2b) preset
+# Langflow + Ollama (granite4:350m) preset
 
-Turnkey stack: Langflow, Postgres, and an Ollama server with `granite3.3:2b` pre-pulled. The Langflow service sees `OLLAMA_BASE_URL=http://ollama:11434`, so the Ollama provider is auto-enabled in **Model Providers** on first login.
+Turnkey stack: Langflow, Postgres, and an Ollama server with `granite4:350m` pre-pulled. The Langflow service sees `OLLAMA_BASE_URL=http://ollama:11434`, so the Ollama provider is auto-enabled in **Model Providers** on first login.
 
 ## Run
 
@@ -14,14 +14,14 @@ Or, with Podman:
 podman compose up  # podman-compose / podman compose plugin
 ```
 
-Open http://localhost:7860. The first start downloads the model (~1.5 GB), so allow a few minutes.
+Open http://localhost:7860. The first start downloads the model (a few hundred MB for `granite4:350m`).
 
 ## Services
 
 - **langflow** — `localhost:7860`, with `OLLAMA_BASE_URL` already set
 - **postgres** — `localhost:5432` (`langflow/langflow`)
 - **ollama** — `localhost:11434`
-- **ollama-init** — one-shot, pulls `granite3.3:2b` then exits
+- **ollama-init** — one-shot, pulls `granite4:350m` then exits
 
 ## GPU
 
````
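Once the stack is up, a readiness check is to look for the model in the server's `/api/tags` listing (the standard Ollama REST endpoint, e.g. `curl -fsS http://localhost:11434/api/tags`). A minimal sketch, where `model_ready` is a hypothetical helper shown here against a canned response rather than a live server:

```shell
# Sketch: check whether a model tag appears in an Ollama /api/tags response.
# In practice the JSON would come from:
#   curl -fsS http://localhost:11434/api/tags
model_ready() {
  local model="$1" json="$2"
  # The tags payload lists pulled models under "models": [{"name": ...}, ...]
  if printf '%s' "$json" | grep -q "\"name\":\"$model\""; then
    echo "$model ready"
  else
    echo "$model missing"
  fi
}

# Trimmed example of a response from a server that has pulled the model:
model_ready "granite4:350m" '{"models":[{"name":"granite4:350m"}]}'
```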

docker_example/ollama-granite/docker-compose.yml

Lines changed: 1 addition & 1 deletion

```diff
@@ -58,7 +58,7 @@ services:
     environment:
       - OLLAMA_HOST=http://ollama:11434
     command: >
-      "ollama pull granite3.3:2b && echo 'granite3.3:2b ready'"
+      "ollama pull granite4:350m && echo 'granite4:350m ready'"
     restart: "no"
 
 volumes:
```
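The init container's `command` is a single shell string, and the `&&` matters: the ready message is printed only when the pull exits 0, so `granite4:350m ready` in the container logs is a reliable success signal. A sketch of that short-circuit behavior, with `pull_model` standing in (hypothetically) for `ollama pull granite4:350m`:

```shell
# `pull_model` is a hypothetical stand-in for `ollama pull granite4:350m`.
pull_model() { return 0; }

# With `&&`, the echo runs only if the pull succeeded (exit status 0),
# so the "ready" line never appears after a failed pull.
pull_model && echo 'granite4:350m ready'
```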

scripts/setup/ollama_bootstrap.sh

Lines changed: 3 additions & 3 deletions

```diff
@@ -1,5 +1,5 @@
 #!/usr/bin/env bash
-# Bootstrap a local Ollama server with the granite3.3:2b model so that
+# Bootstrap a local Ollama server with the granite4:350m model so that
 # `make run_cli` exposes a pre-configured Ollama provider in Langflow.
 #
 # Resolution order:
@@ -16,7 +16,7 @@
 # Environment variables:
 #   OLLAMA_PORT                 default 11434
 #   OLLAMA_HOST                 default 127.0.0.1
-#   LANGFLOW_OLLAMA_MODEL       default granite3.3:2b
+#   LANGFLOW_OLLAMA_MODEL       default granite4:350m
 #   LANGFLOW_OLLAMA_ENGINE      override container engine: docker | podman
 #                               (auto-detected when unset)
 #   LANGFLOW_OLLAMA_NO_INSTALL  set to 1 to disable brew autoinstall
@@ -28,7 +28,7 @@ set -euo pipefail
 
 OLLAMA_PORT="${OLLAMA_PORT:-11434}"
 OLLAMA_HOST="${OLLAMA_HOST:-127.0.0.1}"
-OLLAMA_MODEL="${LANGFLOW_OLLAMA_MODEL:-granite3.3:2b}"
+OLLAMA_MODEL="${LANGFLOW_OLLAMA_MODEL:-granite4:350m}"
 CONTAINER_NAME="langflow-ollama"
 VOLUME_NAME="langflow-ollama-data"
 HEALTH_TIMEOUT_S=60
```
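The default is applied with standard POSIX `${var:-default}` parameter expansion, so an exported `LANGFLOW_OLLAMA_MODEL` always wins and the new model name only takes effect when nothing is set. A small sketch of that resolution (the `llama3:8b` override is a hypothetical example):

```shell
# With the variable unset, the :- expansion falls back to the bundled default.
unset LANGFLOW_OLLAMA_MODEL
DEFAULT_MODEL="${LANGFLOW_OLLAMA_MODEL:-granite4:350m}"

# With the variable set, the exported value is used as-is.
LANGFLOW_OLLAMA_MODEL="llama3:8b"   # hypothetical override
OVERRIDE_MODEL="${LANGFLOW_OLLAMA_MODEL:-granite4:350m}"

echo "default:  $DEFAULT_MODEL"
echo "override: $OVERRIDE_MODEL"
```

Note that `:-` also covers the set-but-empty case, so `LANGFLOW_OLLAMA_MODEL=""` still resolves to the default.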

src/lfx/src/lfx/base/models/ollama_constants.py

Lines changed: 1 addition & 1 deletion

```diff
@@ -137,7 +137,7 @@
     ),
     create_model_metadata(
         provider="Ollama",
-        name="granite3.3:2b",
+        name="granite4:350m",
         icon="Ollama",
         tool_calling=True,
     ),
```
