feat: preconfigured Ollama provider with granite4:350m #12955

Draft
Empreiteiro wants to merge 3 commits into langflow-ai:main from Empreiteiro:claude/stupefied-easley-3ed5cf

Conversation

@Empreiteiro
Collaborator

Summary

Turnkey Ollama experience: starting Langflow auto-configures the Ollama provider in Model Providers with granite4:350m ready to use. Works for both make run_cli (using docker or podman, with optional Homebrew autoinstall fallback) and a new dedicated Compose stack.

granite4:350m is the bundled default — ~676 MB on disk, supports tool calling, and is much quicker to bootstrap on a fresh machine than the 2B variants. The model name is configurable via LANGFLOW_OLLAMA_MODEL.

Wiring (so the provider shows up in the UI)

  • Add OLLAMA_BASE_URL to VARIABLES_TO_GET_FROM_ENVIRONMENT so it gets promoted to a per-user Variable on first login — this is what marks the provider as enabled in the Model Providers UI.
  • Register granite4:350m in the Ollama catalog (tool_calling=True) so it appears in the dropdown.
  • run_cli and run_clic export OLLAMA_BASE_URL=http://localhost:11434 for the Langflow process; the runtime effect is sketched below.
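
For illustration, roughly what that wiring amounts to at runtime (a sketch, not the exact Makefile recipe; the launch command is assumed, since run_cli handles all of this for you):

```bash
# Sketch only: run_cli does the equivalent of this before launching Langflow.
export OLLAMA_BASE_URL=http://localhost:11434

# On first login, the variable service promotes OLLAMA_BASE_URL to a
# per-user Variable, which is what flips the Ollama card to "enabled"
# in Model Providers.
uv run langflow run
```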

Bootstrap script (scripts/setup/ollama_bootstrap.sh)

Idempotent. Resolution order (a condensed sketch follows the list):

  1. Something already responds on :11434 → use it.
  2. A container engine is available — docker OR podman, autodetected — start (or restart) langflow-ollama and run ollama pull granite4:350m via <engine> exec. The Ollama image is pinned to docker.io/ollama/ollama:latest so it resolves under Podman's default registry policy. The engine can be forced with LANGFLOW_OLLAMA_ENGINE=docker|podman.
  3. Local ollama CLI installed → ollama serve in the background, then ollama pull.
  4. macOS + Homebrew → brew install ollama, then go to step 3. Skippable with LANGFLOW_OLLAMA_NO_INSTALL=1.
  5. None of the above → fail loudly with concrete install instructions (Podman / Docker / Ollama) instead of silently letting Langflow start with a dead OLLAMA_BASE_URL. Escape hatch: LANGFLOW_OLLAMA_OPTIONAL=1 make run_cli (warns and continues).
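
A condensed sketch of that resolution order (this is not the shipped scripts/setup/ollama_bootstrap.sh; container flags and the server-wait logic are simplified):

```bash
#!/usr/bin/env bash
# Sketch of the bootstrap's resolution order, simplified.
MODEL="${LANGFLOW_OLLAMA_MODEL:-granite4:350m}"
URL="http://${OLLAMA_HOST:-127.0.0.1}:${OLLAMA_PORT:-11434}"

if curl -fsS "$URL/api/tags" >/dev/null 2>&1; then
  echo "reusing Ollama already listening on $URL"              # 1. already up
  exit 0
fi

ENGINE="${LANGFLOW_OLLAMA_ENGINE:-}"                           # 2. autodetect engine
if [ -z "$ENGINE" ]; then
  for e in docker podman; do
    if command -v "$e" >/dev/null 2>&1; then ENGINE="$e"; break; fi
  done
fi

if [ -n "$ENGINE" ]; then
  "$ENGINE" start langflow-ollama 2>/dev/null ||
    "$ENGINE" run -d --name langflow-ollama \
      -p "${OLLAMA_PORT:-11434}:11434" docker.io/ollama/ollama:latest
  "$ENGINE" exec langflow-ollama ollama pull "$MODEL"
elif command -v ollama >/dev/null 2>&1; then                   # 3. local CLI
  nohup ollama serve >/dev/null 2>&1 &
  sleep 2 && ollama pull "$MODEL"
elif [ "$(uname)" = "Darwin" ] && [ "${LANGFLOW_OLLAMA_NO_INSTALL:-0}" != "1" ] \
     && command -v brew >/dev/null 2>&1; then                  # 4. Homebrew autoinstall
  brew install ollama
  nohup ollama serve >/dev/null 2>&1 &
  sleep 2 && ollama pull "$MODEL"
elif [ "${LANGFLOW_OLLAMA_OPTIONAL:-0}" = "1" ]; then          # escape hatch
  echo "warning: no Ollama available; continuing without it" >&2
else                                                           # 5. fail loudly
  echo "error: install Podman, Docker, or Ollama and retry" >&2
  exit 1
fi
```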

Make targets

  • make ollama_up — runs the bootstrap. Listed as a dependency of run_cli / run_clic.
  • make ollama_down — stops/removes the langflow-ollama container (engine-agnostic — works with podman too). Keeps the model volume. An example session with both targets follows.
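
An example session with those targets (behavior as described above):

```bash
make ollama_up     # runs the bootstrap; safe to re-run, no-op if :11434 already answers
make run_cli       # depends on ollama_up, so this alone suffices on a fresh machine
make ollama_down   # removes the langflow-ollama container but keeps the model volume
```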

Compose stack — docker_example/ollama-granite/

Standalone, runs end-to-end with one command:

docker compose up      # or: podman compose up

Services: langflow + postgres + ollama + ollama-init (a one-shot sidecar that pulls granite4:350m, then exits). An NVIDIA GPU block is included but commented out.
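
A quick sanity check of the stack (service names as listed above; this assumes the stack publishes Ollama on the default 11434):

```bash
cd docker_example/ollama-granite
docker compose up -d                       # or: podman compose up -d
docker compose logs -f ollama-init         # one-shot sidecar: pulls granite4:350m, then exits
curl -s http://localhost:11434/api/tags    # the model should be listed once the init job exits
```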

How a user runs it

Local (Podman, Docker, and no-engine hosts are all handled):

make run_cli

Compose:

cd docker_example/ollama-granite && docker compose up

Open http://localhost:7860. First start downloads granite4:350m (~676 MB). Ollama appears in Model Providers already enabled.

Verified on a dev machine (macOS + Podman)

  • podman ps → container langflow-ollama Up
  • GET http://127.0.0.1:11434/api/tags lists granite4:350m (676 MB)
  • POST /api/generate to granite4:350m returns a real completion
  • After first login on a fresh DB, the Ollama card in Model Providers is enabled and the model is selectable
  • make run_cli exports OLLAMA_BASE_URL, and the Langflow log confirms Set default_fields for non-secret variable OLLAMA_BASE_URL. The API checks are reproduced as commands below.
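
The same checks, as commands (endpoints and model name per the defaults above):

```bash
podman ps --filter name=langflow-ollama     # container up
curl -s http://127.0.0.1:11434/api/tags     # granite4:350m listed (~676 MB)
curl -s http://127.0.0.1:11434/api/generate \
  -d '{"model": "granite4:350m", "prompt": "Say hi", "stream": false}'
```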

Test plan

  • make run_cli on a Podman host → container created, model pulled, Langflow starts wired up
  • make run_cli on a Docker-only host → same path, engine auto-detected as docker
  • make run_cli with ollama serve already on :11434 → bootstrap detects it and skips the engine path
  • make run_cli on macOS without Docker/Podman/Ollama → autoinstalls Ollama via Homebrew, then starts ollama serve
  • make run_cli on Linux without Docker/Podman/Ollama → fails loudly with install instructions (no silent pass-through)
  • LANGFLOW_OLLAMA_OPTIONAL=1 make run_cli on a host without anything → warns and continues
  • After first login on a fresh DB, the Ollama card in Model Providers is enabled and granite4:350m is available
  • docker compose up and podman compose up in docker_example/ollama-granite/ both bring the stack up end-to-end
  • make ollama_down cleans up the container under both docker and podman (without wiping the model volume)
  • Backend variable-service seed tests still pass (test_initialize_user_variables__create_and_update, etc.)

Files

  • src/lfx/src/lfx/services/settings/constants.py — adds OLLAMA_BASE_URL to env-seed list
  • src/lfx/src/lfx/base/models/ollama_constants.py — registers granite4:350m
  • Makefile — adds ollama_up / ollama_down targets and wires them into run_cli / run_clic
  • scripts/setup/ollama_bootstrap.sh — engine-agnostic bootstrap
  • docker_example/ollama-granite/{docker-compose.yml,README.md} — turnkey stack

Configuration knobs (env vars)

Variable                    Default        Purpose
OLLAMA_PORT                 11434          Port to expose Ollama on
OLLAMA_HOST                 127.0.0.1      Host the script probes
LANGFLOW_OLLAMA_MODEL       granite4:350m  Model to ensure-pull
LANGFLOW_OLLAMA_ENGINE      (auto)         Force docker or podman
LANGFLOW_OLLAMA_NO_INSTALL  0              Disable the Homebrew autoinstall fallback
LANGFLOW_OLLAMA_OPTIONAL    0              Warn and continue instead of exiting non-zero
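
For example (granite3.3:2b is the larger variant mentioned in the commit history; any pullable tag should work):

```bash
LANGFLOW_OLLAMA_MODEL=granite3.3:2b make run_cli    # bundle a different model
LANGFLOW_OLLAMA_ENGINE=podman make ollama_up        # force the container engine
LANGFLOW_OLLAMA_OPTIONAL=1 make run_cli             # warn and continue if nothing is available
```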

🤖 Generated with Claude Code

Empreiteiro and others added 3 commits May 1, 2026 13:50
Bundles a turnkey Ollama setup so that, on a fresh start of Langflow, the
Ollama provider already shows up enabled in Model Providers and
granite3.3:2b is ready to use.

- Seed OLLAMA_BASE_URL into VARIABLES_TO_GET_FROM_ENVIRONMENT so it is
  promoted to a per-user Variable on first login (this is what marks the
  provider as configured in the Model Providers UI).
- Register granite3.3:2b in the Ollama model catalog with tool_calling=True.
- scripts/setup/ollama_bootstrap.sh: idempotent helper that brings up an
  Ollama server with granite3.3:2b. Detects an existing server on :11434,
  otherwise prefers a Docker container (named langflow-ollama) and falls
  back to a local 'ollama serve'.
- Makefile: add ollama_up / ollama_down targets; run_cli and run_clic now
  depend on ollama_up and export OLLAMA_BASE_URL for the langflow process.
- docker_example/ollama-granite/: standalone compose stack
  (langflow + postgres + ollama + one-shot ollama-init that pulls
  granite3.3:2b) with a commented NVIDIA GPU block.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The first cut of the bootstrap script silently fell back to a no-op when
neither Docker nor a local 'ollama' CLI was available, leaving Langflow
running with OLLAMA_BASE_URL pointing at nothing. Tightens the script:

- Auto-detect docker OR podman (this repo defaults to podman in the
  Makefile, so podman support is required). Honour LANGFLOW_OLLAMA_ENGINE
  to force one. Pin the image to docker.io/ollama/ollama:latest so it
  resolves under podman's default registry policy.
- On macOS with Homebrew, install Ollama automatically via 'brew install
  ollama' when no engine and no CLI are present (gated by
  LANGFLOW_OLLAMA_NO_INSTALL=1).
- Fail loudly with concrete install instructions instead of warning and
  exiting 0. New LANGFLOW_OLLAMA_OPTIONAL=1 escape hatch for CI or
  contributors who don't need the bundled provider.
- Treat 'wait_for_server' timeouts as fatal so 'make run_cli' aborts
  instead of starting Langflow with a dead Ollama URL.
- ollama_down Make target now also detects podman.
- README mentions the podman compose path.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
granite4:350m is ~676 MB on disk vs ~1.5 GB for granite3.3:2b, making
the bundled provider far quicker to bootstrap on a fresh machine while
still supporting tool calling.

- ollama_bootstrap.sh: LANGFLOW_OLLAMA_MODEL default
- docker_example/ollama-granite: ollama-init pull command, README
- ollama_constants.py: catalog entry replaced
- Makefile: ollama_up help text

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
@coderabbitai
Contributor

coderabbitai Bot commented May 1, 2026

Important

Review skipped

Draft detected.

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.


@github-actions Bot added the enhancement (New feature or request) label on May 1, 2026