feat: bump llama-stack to 0.2.22 (#39)
Conversation
Walkthrough
Bumps llama-stack from 0.2.21 to 0.2.22 across pre-commit, distribution container specs, and the build script; removes the OpenAI package from the initial pip install list.
Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~10 minutes
Pre-merge checks and finishing touches: ✅ Passed checks (3 passed)
@coderabbitai ignore
✅ Actions performed: Reviews paused.
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
distribution/build.py (1)
73-93: Translate uv --no-cache to pip --no-cache-dir in generation
uv's "--no-cache" is not a valid pip flag; the generator currently emits it into the Containerfile.
- Change distribution/build.py (lines ~73–93): when handling "--no-cache", replace the token in packages with "--no-cache-dir" before joining and append the resulting command to no_cache.
- Update template/generation artifacts: distribution/Containerfile.in (line 6) and any generated distribution/Containerfile entries (e.g., line ~46) that currently contain "RUN pip install --no-cache ..." to use "--no-cache-dir" (or let the fixed generator overwrite them).
- Re-run the generator after installing the llama CLI (python3 distribution/build.py) and verify no "pip install --no-cache" remains in distribution/Containerfile.
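The flag translation described above could be sketched as follows. This is a hypothetical illustration, not the actual code in distribution/build.py: the function name `render_pip_install` and the assumption that the flags travel in the same token list as the packages are both made up for the example.

```python
# Hypothetical sketch of the fix suggested for distribution/build.py:
# uv accepts --no-cache, but pip's equivalent flag is --no-cache-dir,
# so translate the token before emitting the "RUN pip install" line.
def render_pip_install(tokens: list[str]) -> str:
    # Map uv-style flags to their pip equivalents before joining.
    flag_map = {"--no-cache": "--no-cache-dir"}
    translated = [flag_map.get(token, token) for token in tokens]
    return "RUN pip install " + " ".join(translated)


print(render_pip_install(["--no-cache", "llama-stack==0.2.22"]))
```

With this translation in place, regenerating the Containerfile would emit `RUN pip install --no-cache-dir llama-stack==0.2.22` instead of the invalid flag.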
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
.pre-commit-config.yaml (1 hunks)
distribution/Containerfile (1 hunks)
distribution/Containerfile.in (1 hunks)
distribution/build.py (1 hunks)
🧰 Additional context used
🧠 Learnings (2)
📓 Common learnings
Learnt from: nathan-weinberg
PR: opendatahub-io/llama-stack-distribution#33
File: distribution/Containerfile:17-21
Timestamp: 2025-09-15T14:25:54.837Z
Learning: In the opendatahub-io/llama-stack-distribution repository, the distribution/Containerfile is auto-generated by distribution/build.py based on configuration in build.yaml. When providers are added to build.yaml, the build script automatically regenerates the Containerfile with the required dependencies. Changes to the Containerfile should not be flagged as manual edits if they correspond to legitimate changes in the build configuration.
Applied to files:
.pre-commit-config.yaml, distribution/Containerfile.in, distribution/Containerfile
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: build-test-push (linux/amd64)
🔇 Additional comments (2)
distribution/build.py (1)
16-16: LGTM: bump to 0.2.22.
.pre-commit-config.yaml (1)
59-59: LGTM — version bump verified
llama-stack==0.2.22 is present on PyPI and repository pins are consistent (distribution/build.py, distribution/Containerfile.in, distribution/Containerfile).
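The pin-consistency claim above can be checked mechanically. A minimal sketch, demonstrated here on sample lines; against the repository you would feed the actual files (.pre-commit-config.yaml, distribution/build.py, distribution/Containerfile.in, distribution/Containerfile) to the same grep:

```shell
# Extract every llama-stack pin and deduplicate; a single line of
# output means all pins agree on one version.
printf '%s\n' \
    'RUN pip install --no-cache llama-stack==0.2.22' \
    '    - llama-stack==0.2.22' \
  | grep -o 'llama-stack==[0-9.]*' | sort -u
```

In the repository you would replace the `printf` with `grep -h -o 'llama-stack==[0-9.]*' <files...>` over the four pinned files.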
RUN pip install --extra-index-url https://download.pytorch.org/whl/cpu torch torchao>=0.12.0 torchvision
RUN pip install --no-deps sentence-transformers
-RUN pip install --no-cache llama-stack==0.2.21
+RUN pip install --no-cache llama-stack==0.2.22
❓ Verification inconclusive
Same pip flag issue here: switch to --no-cache-dir.
This line will error under pip; align with pip’s flag.
-RUN pip install --no-cache llama-stack==0.2.22
+RUN pip install --no-cache-dir llama-stack==0.2.22
Note: Containerfile is auto-generated; after fixing build.py/template, re-generate to propagate the correct flag. Also confirm the removal of the OpenAI package is intentional per distribution/build.yaml (expected if providers changed).
Replace invalid pip flag (--no-cache) with --no-cache-dir.
pip's CLI uses --no-cache-dir; --no-cache is invalid and will error.
File: distribution/Containerfile (line 46)
-RUN pip install --no-cache llama-stack==0.2.22
+RUN pip install --no-cache-dir llama-stack==0.2.22
Containerfile is autogenerated — update the generator/template (distribution/build.py / build.yaml) and re-generate. Confirm the removal of the OpenAI package in distribution/build.yaml is intentional.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
-RUN pip install --no-cache llama-stack==0.2.22
+RUN pip install --no-cache-dir llama-stack==0.2.22
🤖 Prompt for AI Agents
In distribution/Containerfile around line 46, the Dockerfile uses an invalid pip
flag (--no-cache); replace it with the correct flag --no-cache-dir in the
generated Containerfile by updating the template/generator (modify
distribution/build.py or distribution/build.yaml where the pip install line is
produced), then re-generate the Containerfile; while updating, verify whether
the OpenAI package removal in distribution/build.yaml was intentional and
restore it if necessary before re-generating.
RUN pip install sqlalchemy # somehow sqlalchemy[asyncio] is not sufficient
{dependencies}
-RUN pip install --no-cache llama-stack==0.2.21
+RUN pip install --no-cache llama-stack==0.2.22
Fix invalid pip flag: use --no-cache-dir (not --no-cache).
Docker build will fail because pip does not support --no-cache. Replace with --no-cache-dir.
-RUN pip install --no-cache llama-stack==0.2.22
+RUN pip install --no-cache-dir llama-stack==0.2.22
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
-RUN pip install --no-cache llama-stack==0.2.22
+RUN pip install --no-cache-dir llama-stack==0.2.22
🤖 Prompt for AI Agents
In distribution/Containerfile.in around line 6, the RUN command uses an invalid
pip flag (--no-cache); replace it with the correct --no-cache-dir flag so the
Docker build won't fail. Update the pip install invocation to use --no-cache-dir
llama-stack==0.2.22 (keeping the package and version unchanged).
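After regenerating, the fix can be verified with a grep that distinguishes the bare invalid flag from the valid `--no-cache-dir`. A sketch, shown on a sample line; in the repository you would run the same grep over distribution/Containerfile and distribution/Containerfile.in:

```shell
# Match "--no-cache" only when followed by a space or end of line, so
# the valid "--no-cache-dir" does not trigger a false positive.
printf '%s\n' 'RUN pip install --no-cache-dir llama-stack==0.2.22' \
  | grep -E 'pip install.*--no-cache($| )' \
  || echo 'no invalid --no-cache flag found'
```

A regenerated Containerfile should produce no matches, only the confirmation message.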
@derekhiggins any idea on the test failures? thanks
https://github.com/llamastack/llama-stack/releases/tag/v0.2.22
Relates to: RHAIENG-1124
Signed-off-by: Sébastien Han <seb@redhat.com>
0d67f74 to 4e645d3 (Compare)