
feat: bump llama-stack to 0.2.22#39

Merged
leseb merged 1 commit into opendatahub-io:main from leseb:bump-lls
Sep 18, 2025

Conversation

@leseb
Collaborator

@leseb leseb commented Sep 18, 2025

https://github.com/llamastack/llama-stack/releases/tag/v0.2.22

Summary by CodeRabbit

  • Chores
    • Updated the llama-stack dependency to 0.2.22 across build scripts, distribution assets, and pre-commit hooks.
    • Adjusted build/version checks to expect llama-stack 0.2.22; builds fail on a version mismatch.
    • Removed the OpenAI package from the distribution container image to streamline runtime dependencies.

@coderabbitai
Contributor

coderabbitai bot commented Sep 18, 2025

Walkthrough

Bumps llama-stack from 0.2.21 to 0.2.22 across pre-commit, distribution container specs, and the build script; removes the OpenAI package from the initial pip install list in distribution/Containerfile. No public APIs changed.

Changes

| Cohort / File(s) | Summary |
|---|---|
| Pre-commit hook version bump: `.pre-commit-config.yaml` | Update the local hook dependency to llama-stack==0.2.22 for the pkg-gen step. |
| Distribution container specs: `distribution/Containerfile`, `distribution/Containerfile.in` | Bump llama-stack to 0.2.22; remove the OpenAI package from the initial pip install list in `distribution/Containerfile`. No other setup changes. |
| Build script version gate: `distribution/build.py` | Update BASE_REQUIREMENTS to llama-stack==0.2.22; existing version-check logic unchanged (now expects 0.2.22). |
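The version gate itself is not shown in this PR, but the check described above can be sketched roughly as follows. This is an illustrative sketch only; the function names (`pinned_version`, `check_version`) are hypothetical and the real distribution/build.py may structure this differently.

```python
# Illustrative sketch of a build-time version gate: extract the pinned
# llama-stack version from BASE_REQUIREMENTS and fail if it does not
# match the installed version.
BASE_REQUIREMENTS = ["llama-stack==0.2.22"]

def pinned_version(requirements, package="llama-stack"):
    """Return the '==' pinned version of `package` from a requirements list."""
    for req in requirements:
        name, sep, version = req.partition("==")
        if name == package and sep:
            return version
    raise ValueError(f"{package} pin not found")

def check_version(installed, requirements):
    """Abort the build if the installed version does not match the pin."""
    expected = pinned_version(requirements)
    if installed != expected:
        raise SystemExit(
            f"llama-stack version mismatch: installed {installed}, "
            f"expected {expected}"
        )
```

With this shape, bumping the pin in one place (`BASE_REQUIREMENTS`) is enough for the gate to start expecting the new version, which matches the behavior the summary describes.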

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~10 minutes

Poem

I nibble lines of version lore,
From .21 to .22 I hop once more.
One package trimmed, the image light—
Builds hum softly through the night.
A tiny hop, a tidy cheer, carrot-colored code near. 🥕

Pre-merge checks and finishing touches

✅ Passed checks (3 passed)
| Check name | Status | Explanation |
|---|---|---|
| Description Check | ✅ Passed | Check skipped: CodeRabbit's high-level summary is enabled. |
| Title Check | ✅ Passed | The title "feat: bump llama-stack to 0.2.22" is concise and directly describes the primary change: updating the llama-stack dependency to 0.2.22 across the repository (.pre-commit-config.yaml, distribution/Containerfile(.in), and distribution/build.py). Clear and relevant for someone scanning history. |
| Docstring Coverage | ✅ Passed | No functions found in the changes; docstring coverage check skipped. |

📜 Recent review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 0d67f74 and 4e645d3.

📒 Files selected for processing (4)
  • .pre-commit-config.yaml (1 hunks)
  • distribution/Containerfile (1 hunks)
  • distribution/Containerfile.in (1 hunks)
  • distribution/build.py (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (4)
  • .pre-commit-config.yaml
  • distribution/build.py
  • distribution/Containerfile.in
  • distribution/Containerfile
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: build-test-push (linux/amd64)

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.

❤️ Share

Comment @coderabbitai help to get the list of available commands and usage tips.

@leseb
Collaborator Author

leseb commented Sep 18, 2025

@coderabbitai ignore

Collaborator

@Elbehery Elbehery left a comment


LGTM

@coderabbitai
Contributor

coderabbitai bot commented Sep 18, 2025

✅ Actions performed

Reviews paused.

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
distribution/build.py (1)

73-93: Translate uv --no-cache to pip --no-cache-dir in generation

uv's "--no-cache" is not a valid pip flag; the generator currently emits it into the Containerfile.

  • Change distribution/build.py (lines ~73–93): when handling "--no-cache", replace the token in packages with "--no-cache-dir" before joining and append the resulting command to no_cache.
  • Update template/generation artifacts: distribution/Containerfile.in (line 6) and any generated distribution/Containerfile entries (e.g., line ~46) that currently contain "RUN pip install --no-cache ..." to use "--no-cache-dir" (or let the fixed generator overwrite them).
  • Re-run the generator after installing the llama CLI (python3 distribution/build.py) and verify no "pip install --no-cache" remains in distribution/Containerfile.
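The token replacement suggested above can be sketched minimally as follows. The function name `emit_pip_install` is hypothetical and does not correspond to an actual symbol in distribution/build.py; this only illustrates the flag normalization.

```python
# Hypothetical sketch of the fix: normalize uv's "--no-cache" spelling to
# pip's "--no-cache-dir" before joining the tokens into a RUN line, so the
# generated Containerfile never contains a flag pip rejects.
def emit_pip_install(packages):
    """Render a Containerfile RUN line, translating uv-style flags for pip."""
    normalized = [
        "--no-cache-dir" if tok == "--no-cache" else tok for tok in packages
    ]
    return "RUN pip install " + " ".join(normalized)
```

Doing the substitution before the join keeps the generator the single source of truth, so regenerating the Containerfile propagates the corrected flag everywhere.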
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between b535695 and 0d67f74.

📒 Files selected for processing (4)
  • .pre-commit-config.yaml (1 hunks)
  • distribution/Containerfile (1 hunks)
  • distribution/Containerfile.in (1 hunks)
  • distribution/build.py (1 hunks)
🧰 Additional context used
🧠 Learnings (2)
📓 Common learnings
Learnt from: nathan-weinberg
PR: opendatahub-io/llama-stack-distribution#33
File: distribution/Containerfile:17-21
Timestamp: 2025-09-15T14:25:54.837Z
Learning: In the opendatahub-io/llama-stack-distribution repository, the distribution/Containerfile is auto-generated by distribution/build.py based on configuration in build.yaml. When providers are added to build.yaml, the build script automatically regenerates the Containerfile with the required dependencies. Changes to the Containerfile should not be flagged as manual edits if they correspond to legitimate changes in the build configuration.

Applied to files:

  • .pre-commit-config.yaml
  • distribution/Containerfile.in
  • distribution/Containerfile
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: build-test-push (linux/amd64)
🔇 Additional comments (2)
distribution/build.py (1)

16-16: LGTM: bump to 0.2.22.

.pre-commit-config.yaml (1)

59-59: LGTM — version bump verified

llama-stack==0.2.22 is present on PyPI and repository pins are consistent (distribution/build.py, distribution/Containerfile.in, distribution/Containerfile).

RUN pip install --extra-index-url https://download.pytorch.org/whl/cpu torch torchao>=0.12.0 torchvision
RUN pip install --no-deps sentence-transformers
-RUN pip install --no-cache llama-stack==0.2.21
+RUN pip install --no-cache llama-stack==0.2.22
Contributor


⚠️ Potential issue

❓ Verification inconclusive

Same pip flag issue here: switch to --no-cache-dir.

This line will error under pip; align with pip’s flag.

-RUN pip install --no-cache llama-stack==0.2.22
+RUN pip install --no-cache-dir llama-stack==0.2.22

Note: Containerfile is auto-generated; after fixing build.py/template, re-generate to propagate the correct flag. Also confirm the removal of the OpenAI package is intentional per distribution/build.yaml (expected if providers changed).


Replace invalid pip flag (--no-cache) with --no-cache-dir.
pip's CLI uses --no-cache-dir; --no-cache is invalid and will error.
File: distribution/Containerfile (line 46)

-RUN pip install --no-cache llama-stack==0.2.22
+RUN pip install --no-cache-dir llama-stack==0.2.22

Containerfile is autogenerated — update the generator/template (distribution/build.py / build.yaml) and re-generate. Confirm the removal of the OpenAI package in distribution/build.yaml is intentional.

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
-RUN pip install --no-cache llama-stack==0.2.22
+RUN pip install --no-cache-dir llama-stack==0.2.22
🤖 Prompt for AI Agents
In distribution/Containerfile around line 46, the Dockerfile uses an invalid pip
flag (--no-cache); replace it with the correct flag --no-cache-dir in the
generated Containerfile by updating the template/generator (modify
distribution/build.py or distribution/build.yaml where the pip install line is
produced), then re-generate the Containerfile; while updating, verify whether
the OpenAI package removal in distribution/build.yaml was intentional and
restore it if necessary before re-generating.

RUN pip install sqlalchemy # somehow sqlalchemy[asyncio] is not sufficient
{dependencies}
-RUN pip install --no-cache llama-stack==0.2.21
+RUN pip install --no-cache llama-stack==0.2.22
Contributor


⚠️ Potential issue

Fix invalid pip flag: use --no-cache-dir (not --no-cache).

Docker build will fail because pip does not support --no-cache. Replace with --no-cache-dir.

-RUN pip install --no-cache llama-stack==0.2.22
+RUN pip install --no-cache-dir llama-stack==0.2.22
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
-RUN pip install --no-cache llama-stack==0.2.22
+RUN pip install --no-cache-dir llama-stack==0.2.22
🤖 Prompt for AI Agents
In distribution/Containerfile.in around line 6, the RUN command uses an invalid
pip flag (--no-cache); replace it with the correct --no-cache-dir flag so the
Docker build won't fail. Update the pip install invocation to use --no-cache-dir
llama-stack==0.2.22 (keeping the package and version unchanged).

@leseb
Collaborator Author

leseb commented Sep 18, 2025

@derekhiggins any idea on the test failures? thanks

https://github.com/llamastack/llama-stack/releases/tag/v0.2.22

Relates to: RHAIENG-1124
Signed-off-by: Sébastien Han <seb@redhat.com>
Collaborator

@derekhiggins derekhiggins left a comment


lgtm

@leseb leseb merged commit f222f3b into opendatahub-io:main Sep 18, 2025
5 checks passed
@leseb leseb deleted the bump-lls branch September 18, 2025 14:57
