updated dockerfile to support multi arch during build and publish and…#2
Conversation
… added a verify to test it
⚠️ Warning: Rate limit exceeded

⌛ How to resolve this issue? After the wait time has elapsed, a review can be triggered. We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work? CodeRabbit enforces hourly rate limits for each developer per organization. Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout. Please see our FAQ for further information.

ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID:
📒 Files selected for processing (1)
Walkthrough

Adds a GitHub Actions workflow at .github/workflows/verify.yml to build multi-arch Docker images (linux/amd64, linux/arm64) and run a docker-compose smoke test that waits for the Ollama server, prints its version, and calls the embeddings API. Updates the Dockerfile to replace an inline pre-pull loop with a TARGETARCH-aware entrypoint script that starts a temporary server, polls readiness, pre-pulls missing models, cleans up, and then execs the main Ollama server. Adds the new entrypoint.sh implementing strict error handling, readiness polling, model checks, and timeout/logging behavior.

Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes

🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (inconclusive)

✏️ Tip: You can configure your own custom pre-merge checks in the settings.
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out. Comment |
Actionable comments posted: 3
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.github/workflows/verify.yml:
- Around lines 42-45: The workflow currently sets MODELS to an empty string, so neither the Dockerfile's build-time pre-pull nor the entrypoint warm-up executes; update one of the jobs in .github/workflows/verify.yml to set MODELS to at least one lightweight model (e.g., a small public model name) so that the build stage that checks MODELS and the entrypoint path that warms models both run during CI; ensure the job that sets MODELS also retains OLLAMA_KEEP_ALIVE and uses the same VERSION output so the rest of the workflow is unchanged.
- Line 37: In the GitHub Actions workflow verify.yml, update the two occurrences of the string "file: ./Dockerfile" to match the repository filename "dockerfile" (the path is case-sensitive) so the Dockerfile path used by the workflow matches the actual file; search for the literal "file: ./Dockerfile" in verify.yml and replace both instances with "file: ./dockerfile".
In `@dockerfile`:
- Around lines 23-26: The startup loop that retries "ollama list" (the for i in {1..10}; do ollama list ...; sleep 1; done loop) must exit with an explicit failure if the service never becomes reachable: after the retry loop, check whether the last "ollama list" succeeded and, if not, print the relevant logs (/tmp/ollama-serve.log and /tmp/ollama-entrypoint.log) to stdout/stderr and exit non-zero before attempting any "ollama pull" or model checks; make the same change for the second retry block around line 39 so both loops bail out with logs and a non-zero exit when readiness is not achieved.
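The bail-out behavior this comment asks for can be sketched as a standalone function (a sketch only: wait_ready, the probe command, and the log path are stand-ins for the Dockerfile's actual "ollama list" loop and /tmp/ollama-serve.log):

```shell
# wait_ready TRIES CMD...: poll CMD once per second for up to TRIES attempts.
# On failure, dump the (stand-in) log path to stderr and return non-zero so
# callers never proceed to "ollama pull" against a dead server.
wait_ready() {
  local tries="$1"; shift
  local ready=0 i
  for ((i = 1; i <= tries; i++)); do
    if "$@" >/dev/null 2>&1; then
      ready=1
      break
    fi
    sleep 1
  done
  if [ "$ready" -ne 1 ]; then
    echo "ERROR: service did not become ready after ${tries} attempts" >&2
    cat /tmp/ollama-serve.log >&2 || true  # show logs before bailing out
    return 1
  fi
}
```

In the Dockerfile, the same pattern would wrap both retry blocks so each exits non-zero with logs whenever readiness is never reached.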
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 6a1636a2-fc89-4770-8b04-d81cb0fc1e79
📒 Files selected for processing (2)
.github/workflows/verify.yml
dockerfile
.github/workflows/verify.yml (Outdated)

      uses: docker/build-push-action@v6
      with:
        context: .
        file: ./Dockerfile
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
set -e
echo "Workflow references:"
rg -n 'file:\s+\./[Dd]ockerfile' .github/workflows/verify.yml
echo
echo "Dockerfile candidates in the repo:"
fd -HI -t f '^(Dockerfile|dockerfile)$' .

Repository: appwrite/docker-ollama
Length of output: 199
Fix Dockerfile path casing in lines 37 and 70.
Workflow references ./Dockerfile but the actual file is ./dockerfile. On GitHub's case-sensitive Linux runners, this mismatch will fail the build. Update both occurrences to ./dockerfile to match the repository filename.
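The suggested replacement can be applied mechanically; a sketch using GNU sed, demonstrated on a scratch file rather than the real workflow:

```shell
# Scratch copy standing in for .github/workflows/verify.yml.
workflow=$(mktemp)
printf 'file: ./Dockerfile\n  push: true\nfile: ./Dockerfile\n' > "$workflow"

# Replace both occurrences of the wrongly cased path in place (GNU sed -i).
sed -i 's|file: \./Dockerfile|file: ./dockerfile|g' "$workflow"
```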
…ld and entrypoint execution, ensuring models are pulled only when the service is ready. Updated GitHub Actions workflow for smoke testing with embeddinggemma model.
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@dockerfile`:
- Around lines 23-37: The build-time readiness loop gives ollama only 10 seconds (for i in {1..10}) while the runtime entrypoint waits 20 seconds, causing valid but slow builds to fail; update the pre-pull readiness loop that checks "ollama list" (and uses the variables ready and pid and the log /tmp/ollama-serve.log) to wait the same duration as runtime (e.g., extend the loop to 20 iterations or use a shared timeout constant) so the build-time timeout matches the entrypoint's 20-second wait.
- Line 51: The check using "ollama list | grep -qw \"$m\"" can produce false
positives (e.g., "llama3" matching "llama3.1:latest"); replace it with an exact
NAME-column match: pipe "ollama list" through awk to emit only the first (NAME)
column (skip header) and use grep -Fxq to test the exact model name, and if the
requested model has no tag (no ":"), also fall back to testing the exact
"$m:latest" name; update the ENTRYPOINT pipeline where models are iterated (the
for m in "${models[@]}"; do ...; done block) to use this precise NAME-column
equality instead of grep -w.
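The NAME-column equality test described above can be factored into a helper; a sketch that takes the "ollama list" output as an argument so the logic can be exercised against canned text:

```shell
# name_matches LIST MODEL: succeed only if MODEL appears verbatim in the
# first (NAME) column of LIST, skipping the header row; an untagged MODEL
# (no ":") also matches MODEL:latest.
name_matches() {
  local list="$1" m="$2"
  if printf '%s\n' "$list" | awk 'NR > 1 {print $1}' | grep -Fxq "$m"; then
    return 0
  fi
  if [[ "$m" != *:* ]] && printf '%s\n' "$list" | awk 'NR > 1 {print $1}' | grep -Fxq "$m:latest"; then
    return 0
  fi
  return 1
}
```

Unlike grep -w, this never lets "llama3" match "llama3.1:latest", because grep -Fx requires the whole extracted name to be equal.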
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 734636c4-74f8-4f4a-a6eb-b3ef3eb0a3ce
📒 Files selected for processing (2)
.github/workflows/verify.yml
dockerfile
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@dockerfile`:
- Line 51: The ENTRYPOINT ends with exec ollama serve, which drops any docker run/Compose runtime arguments; update the final exec invocation in the long bash command to forward user args by appending "$@" (i.e., exec ollama serve "$@") and add a trailing "--" entry to the ENTRYPOINT array so it fills Bash's $0 and user-supplied arguments land in "$@"; reference the ENTRYPOINT array and the ollama serve invocation to locate and modify the command.
dockerfile (Outdated)

  # On container start, quickly ensure models exist (no re-download unless missing)
- ENTRYPOINT ["/bin/bash", "-c", "read -r -a models <<< \"$MODELS\"; for m in \"${models[@]}\"; do ollama list | grep -qw \"$m\" || ollama pull \"$m\" || exit 1; done && exec ollama serve"]
+ ENTRYPOINT ["/bin/bash", "-lc", "set -euo pipefail; if [[ -n \"${MODELS:-}\" ]]; then read -r -a models <<< \"${MODELS}\"; ollama serve >/tmp/ollama-entrypoint.log 2>&1 & pid=$!; ready=0; for i in {1..20}; do if ollama list >/dev/null 2>&1; then ready=1; break; fi; sleep 1; done; if [[ \"$ready\" -ne 1 ]]; then echo 'ERROR: ollama did not become ready during entrypoint pre-pull' >&2; echo '--- /tmp/ollama-entrypoint.log ---' >&2; cat /tmp/ollama-entrypoint.log >&2 || true; kill \"$pid\" || true; wait \"$pid\" || true; exit 1; fi; for m in \"${models[@]}\"; do if ollama list | awk 'NR>1 {print $1}' | grep -Fxq \"$m\"; then :; elif [[ \"$m\" != *:* ]] && ollama list | awk 'NR>1 {print $1}' | grep -Fxq \"$m:latest\"; then :; else ollama pull \"$m\"; fi; done; kill \"$pid\"; wait \"$pid\" || true; fi; exec ollama serve"]
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
set -euo pipefail
bash -lc 'printf "without placeholder -> 0=%s | @=[%s]\n" "$0" "$*"' foo bar baz
bash -lc 'printf "with placeholder -> 0=%s | @=[%s]\n" "$0" "$*"' -- foo bar baz

Repository: appwrite/docker-ollama
Length of output: 369

🏁 Script executed:

fd -t f dockerfile -o exec wc -l {} \;

Repository: appwrite/docker-ollama
Length of output: 193

🏁 Script executed:

cat -n dockerfile | sed -n '45,55p'

Repository: appwrite/docker-ollama
Length of output: 1076
Forward runtime arguments to ollama serve.
Line 51 ends with exec ollama serve without "$@". When using ENTRYPOINT ["/bin/bash", "-lc", ...], any docker run image <args> or Compose command: values are consumed by Bash as positional parameters and never reach Ollama. This breaks runtime overrides.
Fix
-...; fi; exec ollama serve"]
+...; fi; exec ollama serve \"$@\"", "--"]

The extra "--" becomes Bash's $0, keeping all user-supplied arguments in "$@" for ollama serve.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
- ENTRYPOINT ["/bin/bash", "-lc", "set -euo pipefail; if [[ -n \"${MODELS:-}\" ]]; then read -r -a models <<< \"${MODELS}\"; ollama serve >/tmp/ollama-entrypoint.log 2>&1 & pid=$!; ready=0; for i in {1..20}; do if ollama list >/dev/null 2>&1; then ready=1; break; fi; sleep 1; done; if [[ \"$ready\" -ne 1 ]]; then echo 'ERROR: ollama did not become ready during entrypoint pre-pull' >&2; echo '--- /tmp/ollama-entrypoint.log ---' >&2; cat /tmp/ollama-entrypoint.log >&2 || true; kill \"$pid\" || true; wait \"$pid\" || true; exit 1; fi; for m in \"${models[@]}\"; do if ollama list | awk 'NR>1 {print $1}' | grep -Fxq \"$m\"; then :; elif [[ \"$m\" != *:* ]] && ollama list | awk 'NR>1 {print $1}' | grep -Fxq \"$m:latest\"; then :; else ollama pull \"$m\"; fi; done; kill \"$pid\"; wait \"$pid\" || true; fi; exec ollama serve"]
+ ENTRYPOINT ["/bin/bash", "-lc", "set -euo pipefail; if [[ -n \"${MODELS:-}\" ]]; then read -r -a models <<< \"${MODELS}\"; ollama serve >/tmp/ollama-entrypoint.log 2>&1 & pid=$!; ready=0; for i in {1..20}; do if ollama list >/dev/null 2>&1; then ready=1; break; fi; sleep 1; done; if [[ \"$ready\" -ne 1 ]]; then echo 'ERROR: ollama did not become ready during entrypoint pre-pull' >&2; echo '--- /tmp/ollama-entrypoint.log ---' >&2; cat /tmp/ollama-entrypoint.log >&2 || true; kill \"$pid\" || true; wait \"$pid\" || true; exit 1; fi; for m in \"${models[@]}\"; do if ollama list | awk 'NR>1 {print $1}' | grep -Fxq \"$m\"; then :; elif [[ \"$m\" != *:* ]] && ollama list | awk 'NR>1 {print $1}' | grep -Fxq \"$m:latest\"; then :; else ollama pull \"$m\"; fi; done; kill \"$pid\"; wait \"$pid\" || true; fi; exec ollama serve \"$@\"", "--"]
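The role of the trailing "--" is plain Bash semantics and can be verified in isolation: with bash -c, the first word after the script string becomes $0, so "--" acts as a throwaway placeholder that keeps every real argument in "$@" (a standalone sketch, independent of the repo):

```shell
# Without a placeholder, "foo" is consumed as $0 and vanishes from "$@".
without=$(bash -c 'printf "%s" "$*"' foo bar baz)

# With "--" occupying $0, all user-supplied words remain in "$@".
with=$(bash -c 'printf "%s" "$*"' -- foo bar baz)

echo "without=[$without] with=[$with]"
```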
Let's use a custom script for this
♻️ Duplicate comments (1)

entrypoint.sh (1)

Line 52: ⚠️ Potential issue | 🟠 Major — Preserve runtime command overrides.

Line 52 always replaces the container command with ollama serve, so docker run image <args> and Compose command: values never reach Ollama after switching to a custom entrypoint.

💡 Minimal fix

-exec ollama serve
+if [[ $# -gt 0 ]]; then
+  exec ollama "$@"
+fi
+
+exec ollama serve

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In @entrypoint.sh at line 52, the entrypoint currently hardcodes "exec ollama serve", which prevents runtime command overrides; change that line so the entrypoint executes "ollama" with the container's runtime arguments (use the shell positional parameters "$@") and fall back to "serve" when no arguments are provided (parameter expansion can supply the default). Locate the literal exec ollama serve in the script and replace it accordingly so docker run <image> <args> or Compose command: values are forwarded to Ollama while keeping "serve" as the default.
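The forward-or-default dispatch this review describes can be isolated as a tiny helper that prints the command line instead of exec'ing it, so the logic is checkable without an ollama binary (a sketch; ollama_cmdline is a hypothetical name, not part of the repo):

```shell
# ollama_cmdline ARGS...: print the command the entrypoint would exec —
# the user-supplied arguments when present, "serve" otherwise.
ollama_cmdline() {
  if [ "$#" -gt 0 ]; then
    echo "ollama $*"
  else
    echo "ollama serve"
  fi
}
# In entrypoint.sh this corresponds to: exec ollama "$@" (args given),
# falling back to: exec ollama serve
```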
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 01d49118-0bae-4762-a87a-6b7fad801b50
📒 Files selected for processing (2)
dockerfile
entrypoint.sh
… added a verify to test it
What does this PR do?
(Provide a description of what this PR does.)
Test Plan
(Write your test plan here. If you changed any code, please provide us with clear instructions on how you verified your changes work.)
Related PRs and Issues
(If this PR is related to any other PR or resolves any issue or related to any issue link all related PR and issues here.)
Have you read the Contributing Guidelines on issues?
(Write your answer here.)
Summary by CodeRabbit
New Features
Tests