Upgrade base image to pytorch:26.03-py3 and remove Apex/TE/Megatron/TRT-LLM #15611
Status: Open
thomasdhc previously approved these changes (Apr 15, 2026)
blisc previously approved these changes (Apr 15, 2026)
Upgrade base image to pytorch:26.03-py3 and remove Apex/TE/Megatron/TRT-LLM

The CI container had torchao 0.14.0, which is incompatible with recent peft (requires >=0.16.0), causing test_salm_lora to fail. Upgrade the base NGC PyTorch image to 26.03 (torchao 0.17, PyTorch 2.11) and drop all build/install machinery for Apex, TransformerEngine, Megatron-LM, and TensorRT-LLM, since NeMo is now Speech AI only.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Signed-off-by: Piotr Żelasko <pzelasko@nvidia.com>
Re-add the trt-llm entry in manifest.json, the trt()/trtllm() functions in install_dep.sh, the TRTLLM build args in CI workflows, trt_llm.patch, and the external/patches bind mount in Dockerfile.ci.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Signed-off-by: Piotr Żelasko <pzelasko@nvidia.com>
The wheel package is debian-installed without a RECORD file in pytorch:26.03-py3, causing 'pip install -U wheel' to fail with 'uninstall-no-record-file'. Install wheel separately with --ignore-installed so pip places a newer version in the higher-priority /usr/local site-packages directory.

Signed-off-by: Piotr Żelasko <petezor@gmail.com>
PyYAML is debian-installed without a RECORD file in pytorch:26.03-py3, so 'pip install .[all,cu12]' fails with 'uninstall-no-record-file' when transformers/peft try to upgrade it. Add PyYAML to the --ignore-installed pre-install alongside wheel, using the same shadowing strategy.

Signed-off-by: Piotr Żelasko <pzelasko@nvidia.com>
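The two commits above use the same shadowing strategy for wheel and PyYAML; a minimal Dockerfile.ci sketch of how it could look (the surrounding RUN layout is an assumption, not the actual file):

```dockerfile
# Sketch, not the actual Dockerfile.ci contents. The debian-installed wheel
# and PyYAML in the NGC image ship without RECORD files, so pip cannot
# uninstall them in place. --ignore-installed skips the uninstall step and
# writes fresh copies into the /usr/local site-packages directory, which
# precedes the debian location on sys.path.
RUN pip install --ignore-installed wheel pyyaml

# Later upgrades triggered by transformers/peft now succeed, because the
# /usr/local copies do carry RECORD files.
RUN pip install ".[all,cu12]"
```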
Force-pushed 9407cc6 to e60e773
The pytorch:26.03-py3 NGC base image ships with CUDA 13, so installing cu12-suffixed wheels (cuda-python<13, numba-cuda[cu12], nvidia-cuda-*-cu12) alongside the CUDA 13 runtime is wasteful and can cause conflicts. Switch .[all,cu12] to .[all,cu13] in Dockerfile.ci. Note: the docs workflows still reference --no-extra cu12 and need to be updated by someone with workflow-scope token access.

Signed-off-by: Piotr Żelasko <pzelasko@nvidia.com>
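A sketch of the extras switch described above (the extras names come from the commit; the exact Dockerfile.ci line is assumed):

```dockerfile
# Before: cu12-suffixed wheels fighting the image's CUDA 13 runtime
# RUN pip install ".[all,cu12]"
# After: match the CUDA 13 runtime shipped in pytorch:26.03-py3
RUN pip install ".[all,cu13]"
```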
Signed-off-by: Piotr Żelasko <pzelasko@nvidia.com>
The pytorch:26.03-py3 base ships a pre-release torch 2.11.0a0+nv26.3 build (with matching torchvision/torchaudio pinned to the same local version). Pip's default "prefer stable" policy re-downloads a stable PyPI torch (2.10.0/2.9.1) when resolving nemo_toolkit's torch>=2.6.0, and torchvision then breaks because it expects the exact nv26.3 torch build. Generate a constraints file from the installed torch/torchvision/torchaudio/triton versions and point PIP_CONSTRAINT at it for the .[all,cu13] install, so pip keeps the NGC-provided builds. Also drop 'rm -rf $NEMO_DIR || true': NEMO_DIR is set nowhere in the repo, so the line is a silent no-op today and a latent footgun if the variable is ever exported.

Signed-off-by: Piotr Żelasko <pzelasko@nvidia.com>
The grep for torch/torchvision/torchaudio/triton lines aborted the build heredoc under bash -e when any segment happened to produce no match. Append || true so an empty pin file is harmless, and echo the contents so future failures are diagnosable from annotations alone (the full buildx log is behind an Azure blob that many sandboxed environments can't reach). Signed-off-by: Piotr Żelasko <pzelasko@nvidia.com>
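Taken together, the constraints-file and grep-hardening commits above could be sketched as the following Dockerfile.ci fragment (file paths and the exact RUN layout are assumptions):

```dockerfile
# Pin the NGC-provided pre-release builds (torch 2.11.0a0+nv26.3 and its
# matching torchvision/torchaudio/triton) so pip's "prefer stable" policy
# does not pull a PyPI torch that breaks torchvision.
# The `|| true` keeps an empty match from aborting the build under bash -e;
# the `cat` echoes the pins so failures are diagnosable from the log.
RUN pip list --format=freeze \
      | { grep -E '^(torch|torchvision|torchaudio|triton)==' || true; } \
      > /tmp/ngc-constraints.txt \
 && cat /tmp/ngc-constraints.txt
ENV PIP_CONSTRAINT=/tmp/ngc-constraints.txt
RUN pip install ".[all,cu13]"
```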
Every test job's pre-setup runs 'cp -r /opt/Megatron-LM/ /workspace/', which fails now that Megatron-LM is no longer installed in the CI container. The cp returns non-zero, set -e kills the docker exec, and every test variant (import, L0 setup, unit GPU/CPU x Common/Core/Hydra/Others) fails before running a single test.

Signed-off-by: Piotr Żelasko <pzelasko@nvidia.com>
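A minimal reproduction of that failure mode (the paths mirror the commit; this is not the actual action.yml):

```shell
# Under `set -e`, a cp whose source directory no longer exists returns
# non-zero and aborts the script before any test command runs.
if bash -c 'set -e; cp -r /opt/Megatron-LM/ /workspace/; echo "tests would start here"' 2>/dev/null; then
  echo "cp succeeded (Megatron-LM still present)"
else
  echo "pre-setup aborted before running a single test"
fi
```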
The workflows referenced NVIDIA/NeMo/.github/actions/test-template@main, which fetches the action from the main branch of the repo rather than the PR branch. This meant PR-branch edits to action.yml (like removing the '/opt/Megatron-LM/' cp that no longer applies) had no effect — the CI kept using the stale main-branch action. Switch to './.github/actions/test-template' and add a plain actions/checkout@v6 (no path) right before each use so the action.yml is present at the default workspace when GitHub resolves the local ref. Signed-off-by: Piotr Żelasko <pzelasko@nvidia.com>
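A sketch of the resulting workflow step order (step layout assumed; action inputs elided):

```yaml
# Check out the PR branch first, with no `path:` override, so that
# ./.github/actions/test-template/action.yml exists at the default
# workspace when GitHub resolves the local action reference.
- uses: actions/checkout@v6
- uses: ./.github/actions/test-template
```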
Important
The "Update branch" button must only be pressed on very rare occasions. An outdated branch is never blocking the merge of a PR.
Please reach out to the automation team before pressing that button.
What does this PR do?
Upgrade base image to pytorch:26.03-py3 and remove Apex/TE/Megatron/TRT-LLM.
The CI container had torchao 0.14.0 which is incompatible with recent peft (requires >=0.16.0), causing test_salm_lora to fail. Upgrade the base NGC PyTorch image to 26.03 (torchao 0.17, PyTorch 2.11) and drop all build/install machinery for Apex, TransformerEngine, Megatron-LM, and TensorRT-LLM since NeMo is now Speech AI only.
Collection: CI
Changelog
Usage
# Add a code snippet demonstrating how to use this

GitHub Actions CI
The Jenkins CI system has been replaced by GitHub Actions self-hosted runners.
The GitHub Actions CI will run automatically when the "Run CICD" label is added to the PR.
To re-run CI, remove and re-add the label.
To run CI on an untrusted fork, a NeMo user with write access must first click "Approve and run".
Before your PR is "Ready for review"
Pre checks:
PR Type:
If you haven't finished some of the above items, you can still open a "Draft" PR.
Who can review?
Anyone in the community is free to review the PR once the checks have passed.
The contributor guidelines contain specific people who can review PRs to various areas.
Additional Information