[Version] Auto generate version using setuptools_scm #1224

Open

tjtanaa wants to merge 4 commits into vllm-project:main from EmbeddedLLM:setup_scm_version

Conversation

@tjtanaa (Contributor) commented Feb 5, 2026


Purpose

This PR uses setuptools_scm to auto-generate the version tag from the GitHub release tag.

Following upstream, the vllm-omni version will also contain a platform tag, e.g.

For a release version:

```text
vllm-omni-0.15.0rc2+cuda (on cuda)
vllm-omni-0.15.0rc2+rocm (on rocm)
vllm-omni-0.15.0rc2+npu (on npu)
vllm-omni-0.15.0rc2+xpu (on xpu)
```

For a dev version:

```text
vllm-omni-0.15.0rc2.dev22+gc779d43af.cuda (on cuda)
vllm-omni-0.15.0rc2.dev22+gc779d43af.rocm (on rocm)
vllm-omni-0.15.0rc2.dev22+gc779d43af.npu (on npu)
vllm-omni-0.15.0rc2.dev22+gc779d43af.xpu (on xpu)
```
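For reference, here is a minimal sketch of how a platform tag can be attached as the local version segment with setuptools_scm. This is illustrative only: the `get_platform_tag` helper and the exact scheme are assumptions, not necessarily this PR's implementation.

```python
# setup.py -- illustrative sketch, not necessarily this PR's exact code.
import os

from setuptools import setup


def get_platform_tag() -> str:
    # Hypothetical helper: the real build would detect the target platform
    # (cuda/rocm/npu/xpu); here we just read an environment variable.
    return os.environ.get("VLLM_OMNI_TARGET_DEVICE", "cuda")


def platform_local_scheme(version) -> str:
    # setuptools_scm "local_scheme" hook. It returns the local version
    # segment (the part after "+"). `version` is a setuptools_scm
    # ScmVersion: `version.exact` is True when building from an exact tag,
    # and `version.node` is the short commit id (for git, "g" + sha).
    if version.exact:
        return f"+{get_platform_tag()}"  # e.g. 0.15.0rc2+cuda
    return f"+{version.node}.{get_platform_tag()}"  # e.g. dev22+gc779d43af.cuda


setup(use_scm_version={"local_scheme": platform_local_scheme})
```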

Following vLLM, this PR also adds a way to override the version via VLLM_OMNI_VERSION_OVERRIDE. However, if a developer uses VLLM_OMNI_VERSION_OVERRIDE, the platform tag will not be appended automatically. E.g. with `VLLM_OMNI_VERSION_OVERRIDE=0.16.0 pip install -e .`, the resulting version is vllm-omni==0.16.0, not vllm-omni==0.16.0+cuda.

Retaining the convention set in the vLLM repo, SETUPTOOLS_SCM_PRETEND_VERSION is not exposed directly.
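A similarly hedged sketch of the override path (VLLM_OMNI_VERSION_OVERRIDE is the variable this PR describes; the surrounding structure is assumed):

```python
# setup.py -- illustrative sketch of the override path only.
import os

from setuptools import setup

override = os.environ.get("VLLM_OMNI_VERSION_OVERRIDE")

if override:
    # Use the override verbatim; no platform tag is appended, so
    # VLLM_OMNI_VERSION_OVERRIDE=0.16.0 yields exactly vllm-omni==0.16.0.
    setup(version=override)
else:
    # Fall back to tag-based versioning via setuptools_scm.
    # SETUPTOOLS_SCM_PRETEND_VERSION is deliberately not consulted,
    # keeping the convention from the vLLM repo.
    setup(use_scm_version=True)
```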

Test Plan

Tested with the following commands:

```bash
VLLM_OMNI_TARGET_DEVICE=xpu python3 -m pip install -e . --no-build-isolation
VLLM_OMNI_TARGET_DEVICE=rocm python3 -m pip install -e . --no-build-isolation

python3 -m pip install -e . --no-build-isolation

# create a tag v0.16.0 in my fork, then
python3 -m pip install -e . --no-build-isolation
```
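One way to confirm the resulting version string after each install (standard library only):

```python
# Print the installed vllm-omni version string.
from importlib.metadata import version

print(version("vllm-omni"))  # e.g. 0.15.0rc2.dev22+gc779d43af.rocm
```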

Test Result

```text
vllm-omni-0.15.0rc2.dev22+gc779d43af.rocm
vllm-omni-0.15.0rc2.dev22+gc779d43af.xpu

vllm-omni-0.15.0rc2.dev22+gc779d43af.rocm

vllm-omni-0.16.0+rocm
```

Essential Elements of an Effective PR Description Checklist
  • The purpose of the PR, such as "Fix some issue (link existing issues this PR will resolve)".
  • The test plan, such as providing test command.
  • The test results, such as pasting the results comparison before and after, or e2e results.
  • (Optional) The necessary documentation update, such as updating supported_models.md and examples for a new model.
  • (Optional) Release notes update. If your change is user facing, please update the release notes draft.


@tjtanaa marked this pull request as ready for review on February 5, 2026 at 15:04

@chatgpt-codex-connector (bot) left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 234d889b96

