
updated dockerfile to support multi arch during build and publish and… #2

Merged
abnegate merged 7 commits into main from feat-dockerfile
Mar 11, 2026
Conversation

Contributor

@ArnabChatterjee20k ArnabChatterjee20k commented Mar 11, 2026

… added a verify to test it

What does this PR do?

(Provide a description of what this PR does.)

Test Plan

(Write your test plan here. If you changed any code, please provide us with clear instructions on how you verified your changes work.)

Related PRs and Issues

(If this PR is related to any other PR, or resolves or relates to any issue, link all related PRs and issues here.)

Have you read the Contributing Guidelines on issues?

(Write your answer here.)

Summary by CodeRabbit

  • New Features

    • Multi-architecture Docker images for amd64 and arm64.
    • Improved container startup with architecture-aware model pre-pulling, readiness polling, and clearer failure logs.
  • Tests

    • New verification workflow that builds multi-arch images and runs smoke tests: boots services, checks server readiness/version, and validates embeddings.


coderabbitai bot commented Mar 11, 2026

Warning

Rate limit exceeded

@ArnabChatterjee20k has exceeded the limit for the number of commits that can be reviewed per hour. Please wait 18 minutes and 0 seconds before requesting another review.

⌛ How to resolve this issue?

After the wait time has elapsed, a review can be triggered using the @coderabbitai review command as a PR comment. Alternatively, push new commits to this PR.

We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work?

CodeRabbit enforces hourly rate limits for each developer per organization.

Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout.

Please see our FAQ for further information.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 0284968b-1088-4bbf-a04c-59aa14d41d9f

📥 Commits

Reviewing files that changed from the base of the PR and between f801dd6 and 2c86836.

📒 Files selected for processing (1)
  • .github/workflows/verify.yml

Walkthrough

Adds a GitHub Actions workflow at .github/workflows/verify.yml to build multi-arch Docker images (linux/amd64, linux/arm64) and run a docker-compose smoke test that waits for the Ollama server, prints its version, and calls the embeddings API. Updates the Dockerfile to replace an inline pre-pull loop with a TARGETARCH-aware entrypoint script that starts a temporary server, polls readiness, pre-pulls missing models, cleans up, and then execs the main Ollama server. Adds the new entrypoint.sh implementing strict error handling, readiness polling, model checks, and timeout/logging behavior.
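The entrypoint's pre-pull described in the walkthrough hinges on splitting the space-separated MODELS environment variable into a bash array with `read -r -a`. A minimal sketch of just that splitting step; the model names here are illustrative, not this repo's defaults:

```shell
#!/bin/bash
# Sketch: splitting a space-separated MODELS env var into a bash array,
# as the entrypoint does before pre-pulling. Model names are illustrative.
set -euo pipefail

MODELS="embeddinggemma llama3.2:1b"
read -r -a models <<< "${MODELS}"

echo "count=${#models[@]}"        # number of models to check/pull
for m in "${models[@]}"; do
  echo "would check/pull: $m"
done
```

Note that `read -a` splits on IFS (whitespace by default), so a MODELS value with no models yields an empty array and the loop body never runs — which is why the entrypoint also guards on `[[ -n "${MODELS:-}" ]]`.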

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 inconclusive)

Check name Status Explanation Resolution
Title check ❓ Inconclusive The title is incomplete and truncated (ends with 'and…'), making it vague and difficult to understand the full scope of changes. Complete the title to clearly describe all major changes. Consider: 'Add multi-arch build support and pre-pull entrypoint script for Ollama Docker image'.
✅ Passed checks (2 passed)
Check name Status Explanation
Description Check ✅ Passed Check skipped - CodeRabbit’s high-level summary is enabled.
Docstring Coverage ✅ Passed No functions found in the changed files to evaluate docstring coverage. Skipping docstring coverage check.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing Touches
🧪 Generate unit tests (beta)
  • Create PR with unit tests
  • Post copyable unit tests in a comment
  • Commit unit tests in branch feat-dockerfile

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.

❤️ Share

Comment @coderabbitai help to get the list of available commands and usage tips.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In @.github/workflows/verify.yml:
- Around line 42-45: The workflow currently sets MODELS= empty so neither the
Dockerfile build-time pre-pull nor the entrypoint warm-up execute; update one of
the jobs in .github/workflows/verify.yml to set MODELS to at least one
lightweight model (e.g., a small public model name) so the Docker build-stage
that checks MODELS and the entrypoint path that warms models run during CI;
ensure the job that sets MODELS also retains OLLAMA_KEEP_ALIVE and uses the same
VERSION output so the rest of the workflow is unchanged.
- Line 37: In the GitHub Actions workflow verify.yml update the two occurrences
of the string "file: ./Dockerfile" to match the repository filename "file:
./dockerfile" (case-sensitive) so the Dockerfile path used by the workflow
matches the actual file; search for the literal "file: ./Dockerfile" in
verify.yml and replace both instances with "file: ./dockerfile".

In `@dockerfile`:
- Around line 23-26: The startup loop that retries "ollama list" (the for i in
1...10; do ollama list ...; sleep 1; done loop) must exit with an explicit
failure if the service never becomes reachable: after the retry loop check
whether the last "ollama list" succeeded and if not, print the relevant logs
(/tmp/ollama-serve.log and /tmp/ollama-entrypoint.log) to stdout/stderr and exit
non‑zero before attempting any "ollama pull" or model checks; make the same
change for the second retry block around line 39 so both loops bail out with
logs and a non‑zero exit when readiness is not achieved.
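The bail-out behavior the second comment asks for — fail loudly with a non-zero exit when the retry loop exhausts its attempts, rather than silently falling through to `ollama pull` — can be sketched as a generic polling helper. Here `true`/`false` stand in for `ollama list`, and the attempt counts and delay are illustrative:

```shell
#!/bin/bash
# Sketch of the suggested readiness loop: exit non-zero with a diagnostic
# instead of silently continuing when the service never becomes reachable.
set -euo pipefail

wait_ready() {
  local attempts=$1; shift
  local i
  for ((i = 1; i <= attempts; i++)); do
    if "$@" >/dev/null 2>&1; then
      return 0                      # service answered; proceed to pulls
    fi
    sleep 0.1
  done
  # In the real entrypoint this is where the serve/entrypoint logs would
  # be dumped to stderr before bailing out.
  echo "ERROR: service did not become ready after ${attempts} attempts" >&2
  return 1
}

# `true` stands in for a server that is already up:
wait_ready 5 true && echo "ready"

# `false` stands in for a server that never comes up:
if ! wait_ready 3 false; then
  echo "bailed out with a non-zero status"
fi
```

The key difference from the original loop is that the failure path is explicit: the caller can distinguish "ready" from "timed out" instead of blindly running the next command.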

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 6a1636a2-fc89-4770-8b04-d81cb0fc1e79

📥 Commits

Reviewing files that changed from the base of the PR and between 456d2aa and 3dc8c68.

📒 Files selected for processing (2)
  • .github/workflows/verify.yml
  • dockerfile

uses: docker/build-push-action@v6
with:
context: .
file: ./Dockerfile


⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
set -e

echo "Workflow references:"
rg -n 'file:\s+\./[Dd]ockerfile' .github/workflows/verify.yml

echo
echo "Dockerfile candidates in the repo:"
fd -HI -t f '^(Dockerfile|dockerfile)$' .

Repository: appwrite/docker-ollama

Length of output: 199


Fix Dockerfile path casing in lines 37 and 70.

Workflow references ./Dockerfile but the actual file is ./dockerfile. On GitHub's case-sensitive Linux runners, this mismatch will fail the build. Update both occurrences to ./dockerfile to match the repository filename.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/verify.yml at line 37, In the GitHub Actions workflow
verify.yml update the two occurrences of the string "file: ./Dockerfile" to
match the repository filename "file: ./dockerfile" (case-sensitive) so the
Dockerfile path used by the workflow matches the actual file; search for the
literal "file: ./Dockerfile" in verify.yml and replace both instances with
"file: ./dockerfile".

…ld and entrypoint execution, ensuring models are pulled only when the service is ready. Updated GitHub Actions workflow for smoke testing with embeddinggemma model.

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@dockerfile`:
- Around line 23-37: The build-time readiness loop gives ollama only 10 seconds
(for i in 1..10) while the runtime entrypoint waits 20 seconds, causing valid
slow builds to fail; update the pre-pull readiness loop that checks "ollama
list" (and uses variables ready, pid, /tmp/ollama-serve.log) to wait the same
duration as runtime (e.g., extend the loop to 20 iterations or use the shared
timeout constant) so the build-time timeout matches the entrypoint's 20-second
wait.
- Line 51: The check using "ollama list | grep -qw \"$m\"" can produce false
positives (e.g., "llama3" matching "llama3.1:latest"); replace it with an exact
NAME-column match: pipe "ollama list" through awk to emit only the first (NAME)
column (skip header) and use grep -Fxq to test the exact model name, and if the
requested model has no tag (no ":"), also fall back to testing the exact
"$m:latest" name; update the ENTRYPOINT pipeline where models are iterated (the
for m in "${models[@]}"; do ...; done block) to use this precise NAME-column
equality instead of grep -w.
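The false-positive the second comment describes is real: `grep -qw "llama3"` matches inside `llama3.1:latest` because `.` is a non-word character. A sketch of the exact NAME-column match it suggests, with `ollama list` output simulated by a here-string (model names and sizes are illustrative):

```shell
#!/bin/bash
# Sketch: exact NAME-column matching against `ollama list` output, avoiding
# `grep -w` false positives such as "llama3" matching "llama3.1:latest".
set -euo pipefail

# Simulated `ollama list` output (header + NAME column first):
list_output=$'NAME                   ID      SIZE    MODIFIED
llama3.1:latest        abc123  4.7 GB  2 days ago
embeddinggemma:latest  def456  622 MB  1 day ago'

has_model() {
  local m=$1
  # Compare against the first (NAME) column only, skipping the header row.
  if awk 'NR>1 {print $1}' <<< "$list_output" | grep -Fxq "$m"; then
    return 0
  fi
  # Untagged requests like "llama3.1" should also match "llama3.1:latest".
  [[ "$m" != *:* ]] && awk 'NR>1 {print $1}' <<< "$list_output" | grep -Fxq "$m:latest"
}

has_model "llama3.1" && echo "llama3.1 present (via :latest fallback)"
has_model "llama3" || echo "llama3 absent (no false positive)"
```

`grep -F` treats the pattern as a fixed string (no regex metacharacters) and `-x` requires the whole line to match, so only an exact NAME-column entry counts as "already pulled".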

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 734636c4-74f8-4f4a-a6eb-b3ef3eb0a3ce

📥 Commits

Reviewing files that changed from the base of the PR and between 456d2aa and 1c4f727.

📒 Files selected for processing (2)
  • .github/workflows/verify.yml
  • dockerfile


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@dockerfile`:
- Line 51: The ENTRYPOINT ends with exec ollama serve which drops any docker
run/Compose runtime arguments; update the final exec invocation in the long bash
command to forward user args by adding a "--" then "$@" so that runtime
arguments reach ollama (i.e., change the final token from exec ollama serve to
exec ollama serve -- "$@"); reference the ENTRYPOINT array and the ollama serve
invocation to locate and modify the command.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 4a7600b4-e724-406a-adfd-956bb52eea5f

📥 Commits

Reviewing files that changed from the base of the PR and between 1c4f727 and 7710d69.

📒 Files selected for processing (1)
  • dockerfile

dockerfile Outdated

# On container start, quickly ensure models exist (no re-download unless missing)
ENTRYPOINT ["/bin/bash", "-c", "read -r -a models <<< \"$MODELS\"; for m in \"${models[@]}\"; do ollama list | grep -qw \"$m\" || ollama pull \"$m\" || exit 1; done && exec ollama serve"] No newline at end of file
ENTRYPOINT ["/bin/bash", "-lc", "set -euo pipefail; if [[ -n \"${MODELS:-}\" ]]; then read -r -a models <<< \"${MODELS}\"; ollama serve >/tmp/ollama-entrypoint.log 2>&1 & pid=$!; ready=0; for i in {1..20}; do if ollama list >/dev/null 2>&1; then ready=1; break; fi; sleep 1; done; if [[ \"$ready\" -ne 1 ]]; then echo 'ERROR: ollama did not become ready during entrypoint pre-pull' >&2; echo '--- /tmp/ollama-entrypoint.log ---' >&2; cat /tmp/ollama-entrypoint.log >&2 || true; kill \"$pid\" || true; wait \"$pid\" || true; exit 1; fi; for m in \"${models[@]}\"; do if ollama list | awk 'NR>1 {print $1}' | grep -Fxq \"$m\"; then :; elif [[ \"$m\" != *:* ]] && ollama list | awk 'NR>1 {print $1}' | grep -Fxq \"$m:latest\"; then :; else ollama pull \"$m\"; fi; done; kill \"$pid\"; wait \"$pid\" || true; fi; exec ollama serve"] No newline at end of file


⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
set -euo pipefail

bash -lc 'printf "without placeholder -> 0=%s | @=[%s]\n" "$0" "$*"' foo bar baz
bash -lc 'printf "with placeholder    -> 0=%s | @=[%s]\n" "$0" "$*"' -- foo bar baz

Repository: appwrite/docker-ollama

Length of output: 369


🏁 Script executed:

fd -t f dockerfile -o exec wc -l {} \;

Repository: appwrite/docker-ollama

Length of output: 193


🏁 Script executed:

cat -n dockerfile | sed -n '45,55p'

Repository: appwrite/docker-ollama

Length of output: 1076


Forward runtime arguments to ollama serve.

Line 51 ends with exec ollama serve without "$@". When using ENTRYPOINT ["/bin/bash", "-lc", ...], any docker run image <args> or Compose command: values are consumed by Bash as positional parameters and never reach Ollama. This breaks runtime overrides.

Fix
-...; fi; exec ollama serve"]
+...; fi; exec ollama serve \"$@\"", "--"]

The extra "--" becomes Bash's $0, keeping all user-supplied arguments in "$@" for ollama serve.

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
ENTRYPOINT ["/bin/bash", "-lc", "set -euo pipefail; if [[ -n \"${MODELS:-}\" ]]; then read -r -a models <<< \"${MODELS}\"; ollama serve >/tmp/ollama-entrypoint.log 2>&1 & pid=$!; ready=0; for i in {1..20}; do if ollama list >/dev/null 2>&1; then ready=1; break; fi; sleep 1; done; if [[ \"$ready\" -ne 1 ]]; then echo 'ERROR: ollama did not become ready during entrypoint pre-pull' >&2; echo '--- /tmp/ollama-entrypoint.log ---' >&2; cat /tmp/ollama-entrypoint.log >&2 || true; kill \"$pid\" || true; wait \"$pid\" || true; exit 1; fi; for m in \"${models[@]}\"; do if ollama list | awk 'NR>1 {print $1}' | grep -Fxq \"$m\"; then :; elif [[ \"$m\" != *:* ]] && ollama list | awk 'NR>1 {print $1}' | grep -Fxq \"$m:latest\"; then :; else ollama pull \"$m\"; fi; done; kill \"$pid\"; wait \"$pid\" || true; fi; exec ollama serve"]
ENTRYPOINT ["/bin/bash", "-lc", "set -euo pipefail; if [[ -n \"${MODELS:-}\" ]]; then read -r -a models <<< \"${MODELS}\"; ollama serve >/tmp/ollama-entrypoint.log 2>&1 & pid=$!; ready=0; for i in {1..20}; do if ollama list >/dev/null 2>&1; then ready=1; break; fi; sleep 1; done; if [[ \"$ready\" -ne 1 ]]; then echo 'ERROR: ollama did not become ready during entrypoint pre-pull' >&2; echo '--- /tmp/ollama-entrypoint.log ---' >&2; cat /tmp/ollama-entrypoint.log >&2 || true; kill \"$pid\" || true; wait \"$pid\" || true; exit 1; fi; for m in \"${models[@]}\"; do if ollama list | awk 'NR>1 {print $1}' | grep -Fxq \"$m\"; then :; elif [[ \"$m\" != *:* ]] && ollama list | awk 'NR>1 {print $1}' | grep -Fxq \"$m:latest\"; then :; else ollama pull \"$m\"; fi; done; kill \"$pid\"; wait \"$pid\" || true; fi; exec ollama serve \"$@\"", "--"]
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@dockerfile` at line 51, The ENTRYPOINT ends with exec ollama serve which
drops any docker run/Compose runtime arguments; update the final exec invocation
in the long bash command to forward user args by adding a "--" then "$@" so that
runtime arguments reach ollama (i.e., change the final token from exec ollama
serve to exec ollama serve -- "$@"); reference the ENTRYPOINT array and the
ollama serve invocation to locate and modify the command.

Member


Let's use a custom script for this


@coderabbitai coderabbitai bot left a comment


♻️ Duplicate comments (1)
entrypoint.sh (1)

52-52: ⚠️ Potential issue | 🟠 Major

Preserve runtime command overrides.

Line 52 always replaces the container command with ollama serve, so docker run image <args> and Compose command: values never reach Ollama after switching to a custom entrypoint.

💡 Minimal fix
-exec ollama serve
+if [[ $# -gt 0 ]]; then
+  exec ollama "$@"
+fi
+
+exec ollama serve
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@entrypoint.sh` at line 52, The entrypoint currently hardcodes "exec ollama
serve", which prevents runtime command overrides; change that line so the
entrypoint executes "ollama" with the container's runtime arguments (use shell
positional parameters "$@") and fall back to "serve" when no arguments are
provided (use parameter expansion to provide a default). Locate the literal exec
ollama serve in the script and replace it accordingly so docker run <image>
<args> or Compose command: values are forwarded to Ollama while keeping "serve"
as the default.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Duplicate comments:
In `@entrypoint.sh`:
- Line 52: The entrypoint currently hardcodes "exec ollama serve", which
prevents runtime command overrides; change that line so the entrypoint executes
"ollama" with the container's runtime arguments (use shell positional parameters
"$@") and fall back to "serve" when no arguments are provided (use parameter
expansion to provide a default). Locate the literal exec ollama serve in the
script and replace it accordingly so docker run <image> <args> or Compose
command: values are forwarded to Ollama while keeping "serve" as the default.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 01d49118-0bae-4762-a87a-6b7fad801b50

📥 Commits

Reviewing files that changed from the base of the PR and between 7710d69 and f801dd6.

📒 Files selected for processing (2)
  • dockerfile
  • entrypoint.sh

@abnegate abnegate merged commit 5b9b7cc into main Mar 11, 2026
3 checks passed
