Enhance: do not add nvidia.com/gpu when using vllm-cpu image #229

Open
yyzxw wants to merge 1 commit into llm-d-incubation:main from yyzxw:fix/remove-resource-if-cpu

Conversation

@yyzxw yyzxw commented Mar 3, 2026

Copilot AI review requested due to automatic review settings March 3, 2026 01:57

Copilot AI left a comment


Pull request overview

This PR updates the Helm chart’s accelerator resource resolution so that GPU resources (e.g., nvidia.com/gpu) are not automatically added when a CPU-only vLLM image is used.

Changes:

  • Adjust llm-d-modelservice.acceleratorResource to return no accelerator resource when the container image name suggests CPU usage (in addition to the existing inference-sim exception).
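The helper's described behavior could be sketched as a named Helm template like the one below. This is a hypothetical reconstruction, not the chart's actual source: the helper name matches the PR description, but the context fields (`.image`, `.gpuCount`) and the exact substring checks are illustrative assumptions.

```yaml
{{/*
Sketch of an accelerator-resource helper: emit no GPU resource when
the image name indicates a CPU-only vLLM build or the inference
simulator. Field names (.image, .gpuCount) are illustrative only.
*/}}
{{- define "llm-d-modelservice.acceleratorResource" -}}
{{- $image := .image | default "" -}}
{{- if or (contains "inference-sim" $image) (contains "vllm-cpu" $image) -}}
{{- /* CPU-only or simulator image: emit nothing */ -}}
{{- else -}}
nvidia.com/gpu: {{ .gpuCount | default 1 }}
{{- end -}}
{{- end -}}
```

A template consuming this helper would splice its output into the container's `resources.limits`, so a CPU-only image ends up with no `nvidia.com/gpu` entry at all rather than a zero-valued one.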


Signed-off-by: zxw <1020938856@qq.com>
yyzxw force-pushed the fix/remove-resource-if-cpu branch from 93ffc27 to b00a2e3 on March 3, 2026 02:08
Collaborator

kalantar commented Mar 3, 2026

Please update the issue with more information about this image and why it should be an exception.



Development

Successfully merging this pull request may close these issues.

Do not generate nvidia.com/gpu when using vllm-cpu image

3 participants