Update gptoss-fp4-b200-vllm vLLM image to v0.20.2 #1333
Conversation
…-authored-by: Klaud Cold <Klaud-Cold@users.noreply.github.com>
Thanks for the contribution! For vLLM & SGLang, please ensure that your recipe is similar to the official vLLM recipes and/or the SGLang cookbook. If it is not, please create a PR first before we can merge your single-node PR into the master branch. Let's ensure that the documentation is first class so that the entire ML community can benefit from your hard work!

PR authors are responsible for ensuring that after merging, all GitHub Action jobs fully pass. A lot of the time, failures are just flakes, and simply re-running the failed jobs will fix them. If re-running failed jobs is attempted, PR authors are responsible for ensuring they pass. See GitHub's docs on re-running failed jobs: https://docs.github.com/en/actions/how-tos/manage-workflow-runs/re-run-workflows-and-jobs#re-running-failed-jobs-in-a-workflow

As a rule of thumb, PR authors should request a review and get approval from the respective company's CODEOWNERS before requesting a review from core maintainers. If additional help is needed, PR authors can reach out to core maintainers over Slack.
LGTM — straightforward image version bump with matching changelog entry.
Overview
This PR makes two minimal changes: bumps the gptoss-fp4-b200-vllm image from vllm/vllm-openai:v0.15.1 to vllm/vllm-openai:v0.20.2 in .github/configs/nvidia-master.yaml, and adds a corresponding entry to perf-changelog.yaml documenting the bump. Both changes are config/metadata only.
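For illustration, the image bump in .github/configs/nvidia-master.yaml presumably looks something like this. Only the image tag change is confirmed by the PR; the surrounding keys and indentation are assumptions about the config schema, not repo code:

```yaml
gptoss-fp4-b200-vllm:
  # Confirmed change: serving image tag bumped from v0.15.1 to v0.20.2
  image: vllm/vllm-openai:v0.20.2
```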
Security risks
None. These are YAML config edits to a benchmarks/recipes registry. There is no executable code, no auth/crypto/permissions change, and the new image tag points to an upstream public vLLM release.
Level of scrutiny
Low. This matches the established pattern for image version bumps in this repo (similar to commit 7afb3e9 already on main, and many prior changelog entries above line 2345 follow the same shape). The change is mechanical and self-contained.
Other factors
The bug hunter flagged a nit — the changelog pr-link uses a literal XXX placeholder rather than 1333. That is metadata-only and is already surfaced as an inline comment for the author to fix before merge; it does not affect runtime behavior, so it is not blocking for approval.
```yaml
- gptoss-fp4-b200-vllm
  description:
    - "Update vLLM image from v0.15.1 to v0.20.2"
  pr-link: https://github.com/SemiAnalysisAI/InferenceX/pull/XXX
```
🟡 The new perf-changelog.yaml entry uses a literal 'XXX' placeholder in its pr-link (https://github.com/SemiAnalysisAI/InferenceX/pull/XXX) instead of the actual PR number. Since this is PR #1333, the link should be .../pull/1333; as written, anyone following the changelog cross-reference will hit a 404. Fix by replacing XXX with 1333.
What the bug is

The diff adds a new entry to perf-changelog.yaml at lines 2346–2351 documenting the vLLM image bump for gptoss-fp4-b200-vllm. The final field is:

```yaml
  pr-link: https://github.com/SemiAnalysisAI/InferenceX/pull/XXX
```

The XXX is a literal placeholder string that was never substituted with the real PR number. This PR is #1333, so the URL should be https://github.com/SemiAnalysisAI/InferenceX/pull/1333.

Why existing code doesn't prevent it

The changelog is a free-form YAML document; nothing in the repo validates that pr-link URLs point to existing PRs, and YAML syntax accepts the placeholder string just fine. The convention is enforced socially: every other recent entry (line 2332 → /pull/1304, line 2338 → /pull/1305, line 2345 → /pull/1310, etc.) uses the real PR number, so this one stands out as an oversight rather than a structural failure.

Impact

This is metadata only and has no runtime effect; benchmarks, image selection, and CI all ignore the changelog. The harm is purely to changelog traceability: any reader (engineer, release manager, future PR author) who clicks through to understand the context behind the v0.15.1 → v0.20.2 vLLM bump will land on a GitHub 404, since /pull/XXX is not a valid PR. The change history breaks at exactly the point where it's being recorded.

How to fix

Replace XXX with 1333 on line 2351:

```yaml
  pr-link: https://github.com/SemiAnalysisAI/InferenceX/pull/1333
```

Step-by-step proof

1. Open the PR in GitHub: the URL bar shows .../pull/1333, so this PR's number is 1333.
2. View the diff for perf-changelog.yaml. The added block ends with: pr-link: https://github.com/SemiAnalysisAI/InferenceX/pull/XXX.
3. Construct the resulting URL: https://github.com/SemiAnalysisAI/InferenceX/pull/XXX. XXX is not a valid integer PR identifier, so GitHub returns a 404 Not Found.
4. Compare against the entry immediately above (lines 2343–2345): pr-link: https://github.com/SemiAnalysisAI/InferenceX/pull/1310, a real PR number, which resolves correctly.
5. The correct value for this entry, by the same convention, is 1333. Replacing XXX → 1333 restores changelog traceability.

Severity

Nit: the placeholder is clearly an unintentional oversight (a sed/replace step that was skipped) and trivially fixable before merge, but it does not affect runtime behavior.
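Since the review notes that nothing in the repo validates pr-link values, a lightweight CI check could catch this class of oversight before merge. The sketch below is hypothetical (the regex and function name are assumptions, not existing repo tooling): it flags any pr-link whose trailing PR id is not numeric, such as a leftover XXX placeholder.

```python
import re

# Match a changelog pr-link line and capture the trailing PR identifier.
# The URL shape is taken from the entries quoted in the review above.
PR_LINK_RE = re.compile(
    r"pr-link:\s*https://github\.com/[\w.-]+/[\w.-]+/pull/(\S+)"
)

def find_placeholder_links(changelog_text: str) -> list[str]:
    """Return every pr-link whose PR id is not a number (e.g. 'XXX')."""
    bad = []
    for match in PR_LINK_RE.finditer(changelog_text):
        pr_id = match.group(1)
        if not pr_id.isdigit():
            bad.append(match.group(0))
    return bad

if __name__ == "__main__":
    sample = (
        "  pr-link: https://github.com/SemiAnalysisAI/InferenceX/pull/1310\n"
        "  pr-link: https://github.com/SemiAnalysisAI/InferenceX/pull/XXX\n"
    )
    for link in find_placeholder_links(sample):
        print("placeholder pr-link:", link)
```

Run against the real perf-changelog.yaml in CI, a non-empty result would fail the job and surface exactly the oversight flagged in this PR.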
See the unofficial run visualizer at https://inferencex.semianalysis.com/inference?unofficialRun=25763432241
Summary
- Update gptoss-fp4-b200-vllm image from vllm/vllm-openai:v0.15.1 to vllm/vllm-openai:v0.20.2

Ref #1154
Generated with Claude Code