feat: add Containerfile for building vllm CPU images #4

Triggered via pull request January 14, 2026 19:00
Status Failure
Total duration 1m 1s
Artifacts 1

vllm-cpu-container.yml

on: pull_request
Matrix: build-test-push

Annotations

1 error and 1 warning
build-test-push (linux/amd64)
buildx failed with: ERROR: failed to build: failed to solve: process "/bin/sh -c if [ -z \"${INFERENCE_MODEL}\" ]; then echo \"ERROR: INFERENCE_MODEL build argument is required\" >&2 && exit 1; fi && if [ -z \"${EMBEDDING_MODEL}\" ]; then echo \"ERROR: EMBEDDING_MODEL build argument is required\" >&2 && exit 1; fi" did not complete successfully: exit code: 1
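
The failing step is the Containerfile's required-argument check: the build ran without `INFERENCE_MODEL` and `EMBEDDING_MODEL` build args, so the guard exited with code 1. A minimal sketch of that guard, reproduced as a standalone script with placeholder model names (the real values must be supplied via `--build-arg` at build time):

```shell
#!/bin/sh
# Placeholder values; in the real build these arrive as --build-arg inputs.
INFERENCE_MODEL="example/inference-model"
EMBEDDING_MODEL="example/embedding-model"

# Same check the Containerfile runs: fail fast if either arg is empty.
if [ -z "${INFERENCE_MODEL}" ]; then
  echo "ERROR: INFERENCE_MODEL build argument is required" >&2
  exit 1
fi
if [ -z "${EMBEDDING_MODEL}" ]; then
  echo "ERROR: EMBEDDING_MODEL build argument is required" >&2
  exit 1
fi
echo "build args present"
```

A build invocation would then pass both args explicitly, e.g. `--build-arg INFERENCE_MODEL=... --build-arg EMBEDDING_MODEL=...`; in the workflow's matrix build these would typically come from `build-args` in the buildx step.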
build-test-push (linux/amd64)
No file matched to [/home/runner/work/llama-stack-distribution/llama-stack-distribution/**/*requirements*.txt,/home/runner/work/llama-stack-distribution/llama-stack-distribution/**/*requirements*.in,/home/runner/work/llama-stack-distribution/llama-stack-distribution/**/*constraints*.txt,/home/runner/work/llama-stack-distribution/llama-stack-distribution/**/*constraints*.in,/home/runner/work/llama-stack-distribution/llama-stack-distribution/**/pyproject.toml,/home/runner/work/llama-stack-distribution/llama-stack-distribution/**/uv.lock,/home/runner/work/llama-stack-distribution/llama-stack-distribution/**/*.py.lock]. The cache will never get invalidated. Make sure you have checked out the target repository and configured the cache-dependency-glob input correctly.
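
The warning message matches the cache check emitted by the astral-sh/setup-uv action: its `cache-dependency-glob` found no dependency files, usually because the repository was not checked out before the action ran or the glob does not match any file. A hypothetical workflow fragment, assuming that action is in use, showing the checkout ordering and a glob pointing at an existing file:

```yaml
# Sketch only; step versions and the glob pattern are assumptions.
- uses: actions/checkout@v4        # must run before setup-uv evaluates the glob
- uses: astral-sh/setup-uv@v5
  with:
    cache-dependency-glob: "**/pyproject.toml"
```

If the repository has no Python dependency files at all, disabling the cache for this action is the simpler fix.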

Artifacts

Produced during runtime
Name: opendatahub-io~llama-stack-distribution~G0SD95.dockerbuild
Size: 32.4 KB
Digest: sha256:8fe9479f2f45d4d97db5d4c75023b73df1f8241af0553426a6f8fa9a974db915