feat: preconfigured Ollama provider with granite4:350m #12955
Draft
Empreiteiro wants to merge 3 commits into langflow-ai:main
Conversation
Bundles a turnkey Ollama setup so that, on a fresh start of Langflow, the Ollama provider already shows up enabled in Model Providers and granite3.3:2b is ready to use.

- Seed OLLAMA_BASE_URL into VARIABLES_TO_GET_FROM_ENVIRONMENT so it is promoted to a per-user Variable on first login (this is what marks the provider as configured in the Model Providers UI).
- Register granite3.3:2b in the Ollama model catalog with tool_calling=True.
- scripts/setup/ollama_bootstrap.sh: idempotent helper that brings up an Ollama server with granite3.3:2b. Detects an existing server on :11434, otherwise prefers a Docker container (named langflow-ollama) and falls back to a local 'ollama serve'.
- Makefile: add ollama_up / ollama_down targets; run_cli and run_clic now depend on ollama_up and export OLLAMA_BASE_URL for the langflow process.
- docker_example/ollama-granite/: standalone compose stack (langflow + postgres + ollama + one-shot ollama-init that pulls granite3.3:2b) with a commented NVIDIA GPU block.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The first cut of the bootstrap script silently fell back to a no-op when neither Docker nor a local 'ollama' CLI was available, leaving Langflow running with OLLAMA_BASE_URL pointing at nothing. This tightens the script:

- Auto-detect docker OR podman (this repo defaults to podman in the Makefile, so podman support is required). Honour LANGFLOW_OLLAMA_ENGINE to force one. Pin the image to docker.io/ollama/ollama:latest so it resolves under podman's default registry policy.
- On macOS with Homebrew, install Ollama automatically via 'brew install ollama' when no engine and no CLI are present (skippable with LANGFLOW_OLLAMA_NO_INSTALL=1).
- Fail loudly with concrete install instructions instead of warning and exiting 0. New LANGFLOW_OLLAMA_OPTIONAL=1 escape hatch for CI or contributors who don't need the bundled provider.
- Treat 'wait_for_server' timeouts as fatal so 'make run_cli' aborts instead of starting Langflow with a dead Ollama URL.
- The ollama_down Make target now also detects podman.
- README mentions the podman compose path.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
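As a sketch of the tightened `wait_for_server` behaviour (assumed shape, not the verbatim script):

```bash
# Poll the Ollama API until it answers; a timeout is fatal so the caller
# aborts instead of starting Langflow against a dead URL.
wait_for_server() {
  url="http://${OLLAMA_HOST:-127.0.0.1}:${OLLAMA_PORT:-11434}/api/tags"
  for _ in $(seq 1 60); do
    curl -fsS "$url" >/dev/null 2>&1 && return 0
    sleep 1
  done
  echo "ollama_bootstrap: Ollama did not come up at $url" >&2
  return 1
}
```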
granite4:350m is ~676 MB on disk vs ~1.5 GB for granite3.3:2b, making the bundled provider far quicker to bootstrap on a fresh machine while still supporting tool calling.

- ollama_bootstrap.sh: LANGFLOW_OLLAMA_MODEL default
- docker_example/ollama-granite: ollama-init pull command, README
- ollama_constants.py: catalog entry replaced
- Makefile: ollama_up help text

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Summary
Turnkey Ollama experience: starting Langflow auto-configures the Ollama provider in Model Providers with `granite4:350m` ready to use. Works for both `make run_cli` (using `docker` or `podman`, with an optional Homebrew autoinstall fallback) and a new dedicated Compose stack.

`granite4:350m` is the bundled default: ~676 MB on disk, supports tool calling, and much quicker to bootstrap on a fresh machine than the 2B variants. The model name is configurable via `LANGFLOW_OLLAMA_MODEL`.

Wiring (so the provider shows up in the UI)
- Seeds `OLLAMA_BASE_URL` into `VARIABLES_TO_GET_FROM_ENVIRONMENT` so it gets promoted to a per-user `Variable` on first login; this is what marks the provider as enabled in the Model Providers UI (see the sketch after this list).
- Registers `granite4:350m` in the Ollama catalog (`tool_calling=True`) so it appears in the dropdown.
- `run_cli` and `run_clic` export `OLLAMA_BASE_URL=http://localhost:11434` for the Langflow process.
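Outside `make run_cli` (which exports this automatically), the wiring amounts to having the value in the environment when Langflow starts. A minimal sketch; the `uv run langflow run` invocation is an assumed entry point, and any launch method with the variable set behaves the same:

```bash
# Any OLLAMA_BASE_URL present at first login is promoted to a per-user
# Variable, which is what flips the Ollama provider to configured in the UI.
export OLLAMA_BASE_URL="http://localhost:11434"
uv run langflow run
```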
Bootstrap script (`scripts/setup/ollama_bootstrap.sh`)

Idempotent. Resolution order:
1. An Ollama server is already listening on `:11434` → use it.
2. `docker` OR `podman`, autodetected (sketched after this list) → start (or restart) `langflow-ollama` and `<engine> exec` an `ollama pull granite4:350m`. The Ollama image is pinned to `docker.io/ollama/ollama:latest` so it resolves under Podman's default registry policy. The engine can be forced with `LANGFLOW_OLLAMA_ENGINE=docker|podman`.
3. A local `ollama` CLI is installed → `ollama serve` in the background, then `ollama pull`.
4. macOS with Homebrew and nothing else available → `brew install ollama`, then go to step 3. Skippable with `LANGFLOW_OLLAMA_NO_INSTALL=1`.
5. Otherwise → fail loudly with install instructions rather than starting Langflow with a dead `OLLAMA_BASE_URL`. Escape hatch: `LANGFLOW_OLLAMA_OPTIONAL=1 make run_cli` (warns and continues).
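The engine resolution in step 2 could look roughly like this (a sketch under the behaviour described above; the function name and the docker-before-podman preference are illustrative):

```bash
# Pick a container engine: an explicit override wins, otherwise take the
# first of docker/podman found on PATH. Callers fall through to the
# local-CLI and Homebrew paths when this returns non-zero.
detect_engine() {
  if [ -n "${LANGFLOW_OLLAMA_ENGINE:-}" ]; then
    echo "$LANGFLOW_OLLAMA_ENGINE"
    return 0
  fi
  for engine in docker podman; do
    if command -v "$engine" >/dev/null 2>&1; then
      echo "$engine"
      return 0
    fi
  done
  return 1
}
```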
Make targets

- `make ollama_up`: runs the bootstrap. Listed as a dependency of `run_cli` / `run_clic`.
- `make ollama_down`: stops/removes the `langflow-ollama` container (engine-agnostic; works with podman too). Keeps the model volume.
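Typical lifecycle with these targets:

```bash
make ollama_up     # bootstrap only (run_cli / run_clic already depend on it)
make run_cli       # starts Langflow with OLLAMA_BASE_URL exported
make ollama_down   # stop/remove langflow-ollama; the model volume is kept
```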
Compose stack: `docker_example/ollama-granite/`

Standalone, runs end-to-end with one command:
```bash
docker compose up   # or: podman compose up
```
Services: `langflow` + `postgres` + `ollama` + `ollama-init` (a one-shot sidecar that pulls `granite4:350m` then exits; its job is sketched below). An NVIDIA GPU block is included but commented out.
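Roughly what the sidecar does, in shell (an assumed shape; pointing the CLI at the `ollama` service via `OLLAMA_HOST` is an assumption about the stack's internal wiring):

```bash
# Wait for the ollama service to answer, pull the model once, then exit 0
# so the one-shot container finishes cleanly.
export OLLAMA_HOST=http://ollama:11434
until ollama list >/dev/null 2>&1; do sleep 1; done
ollama pull "${LANGFLOW_OLLAMA_MODEL:-granite4:350m}"
```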
How a user runs it

Local (Podman / Docker / no engine are all handled), via the Make target described above:
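```bash
make run_cli   # ollama_up runs first; OLLAMA_BASE_URL is exported for Langflow
```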
Compose:
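```bash
cd docker_example/ollama-granite
docker compose up   # or: podman compose up
```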
Open http://localhost:7860. First start downloads `granite4:350m` (~676 MB). Ollama appears in Model Providers already enabled.

Verified on a dev machine (macOS + Podman)
- `podman ps` → container `langflow-ollama` is Up
- `GET http://127.0.0.1:11434/api/tags` lists `granite4:350m` (676 MB)
- `POST /api/generate` to `granite4:350m` returns a real completion
- `make run_cli` exports `OLLAMA_BASE_URL`, and the Langflow log confirms `Set default_fields for non-secret variable OLLAMA_BASE_URL`
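The API checks can be reproduced with plain curl against the standard Ollama endpoints:

```bash
curl -s http://127.0.0.1:11434/api/tags        # model list should include granite4:350m
curl -s http://127.0.0.1:11434/api/generate \
  -d '{"model": "granite4:350m", "prompt": "Say hi", "stream": false}'
```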
Test plan

- `make run_cli` on a Podman host → container created, model pulled, Langflow starts wired up
- `make run_cli` on a Docker-only host → same path, engine auto-detected as docker
- `make run_cli` with `ollama serve` already on `:11434` → bootstrap detects it and skips the engine path
- `make run_cli` on macOS without Docker/Podman/Ollama → autoinstalls Ollama via Homebrew, then starts `ollama serve`
- `make run_cli` on Linux without Docker/Podman/Ollama → fails loudly with install instructions (no silent pass-through)
- `LANGFLOW_OLLAMA_OPTIONAL=1 make run_cli` on a host without anything → warns and continues
- `granite4:350m` is available
- `docker compose up` and `podman compose up` in `docker_example/ollama-granite/` both bring the stack up end-to-end
- `make ollama_down` cleans up the container under both docker and podman (without wiping the model volume)
- Unit tests (`test_initialize_user_variables__create_and_update`, etc.)

Files
- `src/lfx/src/lfx/services/settings/constants.py`: adds `OLLAMA_BASE_URL` to the env-seed list
- `src/lfx/src/lfx/base/models/ollama_constants.py`: registers `granite4:350m`
- `Makefile`: `ollama_up` / `ollama_down` targets, wired into `run_cli` / `run_clic`
- `scripts/setup/ollama_bootstrap.sh`: engine-agnostic bootstrap
- `docker_example/ollama-granite/{docker-compose.yml,README.md}`: turnkey stack

Configuration knobs (env vars)
| Variable | Default |
| --- | --- |
| `OLLAMA_PORT` | `11434` |
| `OLLAMA_HOST` | `127.0.0.1` |
| `LANGFLOW_OLLAMA_MODEL` | `granite4:350m` |
| `LANGFLOW_OLLAMA_ENGINE` | `docker` or `podman` (auto-detected) |
| `LANGFLOW_OLLAMA_NO_INSTALL` | `0` |
| `LANGFLOW_OLLAMA_OPTIONAL` | `0` |

🤖 Generated with Claude Code