Upgrade base image to pytorch:26.03-py3 and remove Apex/TE/Megatron/TRT-LLM #15611

Open

pzelasko wants to merge 11 commits into main from container-cleanup

Conversation

pzelasko (Collaborator) commented Apr 15, 2026

Important

The Update branch button must only be pressed on very rare occasions.
An outdated branch never blocks the merge of a PR.
Please reach out to the automation team before pressing that button.

What does this PR do?

Upgrade base image to pytorch:26.03-py3 and remove Apex/TE/Megatron/TRT-LLM.

The CI container had torchao 0.14.0 which is incompatible with recent peft (requires >=0.16.0), causing test_salm_lora to fail. Upgrade the base NGC PyTorch image to 26.03 (torchao 0.17, PyTorch 2.11) and drop all build/install machinery for Apex, TransformerEngine, Megatron-LM, and TensorRT-LLM since NeMo is now Speech AI only.

Collection: CI

Changelog

  • Upgrade base container image to pytorch:26.03-py3 and remove Apex/TE/Megatron/TRT-LLM.

Usage

  • This PR only changes the CI container build, so there is no library-level usage change; a hedged sketch of a local image build follows below.
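A minimal sketch of building the updated CI image locally, assuming Dockerfile.ci accepts a BASE_IMAGE build arg (the arg name and the local tag are assumptions, not confirmed by this PR):

```bash
# Hypothetical local build against the new NGC base image.
# BASE_IMAGE is an assumed build-arg name; check Dockerfile.ci for the real interface.
docker buildx build \
  --file Dockerfile.ci \
  --build-arg BASE_IMAGE=nvcr.io/nvidia/pytorch:26.03-py3 \
  --tag nemo-ci:26.03 \
  .
```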

GitHub Actions CI

The Jenkins CI system has been replaced by GitHub Actions self-hosted runners.

The GitHub Actions CI will run automatically when the "Run CICD" label is added to the PR.
To re-run CI, remove and re-add the label.
To run CI on an untrusted fork, a NeMo user with write access must first click "Approve and run".

Before your PR is "Ready for review"

Pre checks:

  • Make sure you read and followed Contributor guidelines
  • Did you write any new necessary tests?
  • Did you add or update any necessary documentation?
  • Does the PR affect components that are optional to install? (Ex: Numba, Pynini, Apex etc)
    • Reviewer: Does the PR have correct import guards for all optional libraries?

PR Type:

  • New Feature
  • Bugfix
  • Documentation

If you haven't finished some of the above items, you can still open a "Draft" PR.

Who can review?

Anyone in the community is free to review the PR once the checks have passed.
The Contributor guidelines list specific people who can review PRs in various areas.

Additional Information

  • Related to # (issue)

thomasdhc previously approved these changes Apr 15, 2026
blisc previously approved these changes Apr 15, 2026
pzelasko and others added 4 commits April 17, 2026 16:29
Upgrade base image to pytorch:26.03-py3 and remove Apex/TE/Megatron/TRT-LLM

The CI container had torchao 0.14.0 which is incompatible with recent
peft (requires >=0.16.0), causing test_salm_lora to fail. Upgrade the
base NGC PyTorch image to 26.03 (torchao 0.17, PyTorch 2.11) and drop
all build/install machinery for Apex, TransformerEngine, Megatron-LM,
and TensorRT-LLM since NeMo is now Speech AI only.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Signed-off-by: Piotr Żelasko <pzelasko@nvidia.com>
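A quick sketch of the mismatch this commit fixes, runnable inside the old and new containers (the version numbers come from the commit message; the commands themselves are illustrative):

```bash
# Old CI container: torchao 0.14.0, too old for recent peft (needs >=0.16.0).
pip show torchao | grep '^Version'
# New pytorch:26.03-py3 base: torchao 0.17 and PyTorch 2.11, per the commit message.
python -c "import torch, torchao; print(torch.__version__, torchao.__version__)"
```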
Re-add trt-llm entry in manifest.json, trt()/trtllm() functions in
install_dep.sh, TRTLLM build args in CI workflows, trt_llm.patch, and
the external/patches bind mount in Dockerfile.ci.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Signed-off-by: Piotr Żelasko <pzelasko@nvidia.com>
The wheel package is debian-installed without a RECORD file in
pytorch:26.03-py3, causing 'pip install -U wheel' to fail with
'uninstall-no-record-file'. Install wheel separately with
--ignore-installed so pip places a newer version in the higher-priority
/usr/local site-packages directory.

Signed-off-by: Piotr Żelasko <petezor@gmail.com>
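A sketch of the workaround, assuming the stock Debian/NGC Python layout where /usr/local/lib/python3.*/dist-packages precedes /usr/lib/python3/dist-packages on sys.path:

```bash
# 'pip install -U wheel' fails here: pip first tries to uninstall the
# Debian-packaged wheel, whose dist-info has no RECORD file
# ('uninstall-no-record-file'). --ignore-installed skips the uninstall and
# writes the newer wheel into /usr/local, which shadows the Debian copy.
pip install --ignore-installed -U wheel
```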
PyYAML is debian-installed without a RECORD file in pytorch:26.03-py3,
so 'pip install .[all,cu12]' fails with 'uninstall-no-record-file' when
transformers/peft try to upgrade it. Add PyYAML to the --ignore-installed
pre-install alongside wheel, using the same shadowing strategy.

Signed-off-by: Piotr Żelasko <pzelasko@nvidia.com>
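The combined pre-install step, sketched; its exact position in Dockerfile.ci is an assumption:

```bash
# Shadow both Debian-packaged offenders up front so the later
# 'pip install .[all,...]' never has to uninstall them.
pip install --ignore-installed -U wheel pyyaml
```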
The pytorch:26.03-py3 NGC base image ships with CUDA 13, so installing
cu12-suffixed wheels (cuda-python<13, numba-cuda[cu12], nvidia-cuda-*-cu12)
alongside the CUDA 13 runtime is wasteful and can cause conflicts. Switch
.[all,cu12] to .[all,cu13] in Dockerfile.ci.

Note: the docs workflows still reference --no-extra cu12 and need to be
updated by someone with workflow-scope token access.

Signed-off-by: Piotr Żelasko <pzelasko@nvidia.com>
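A before/after sketch of the extras switch (the source-tree install form is an assumption; only the extras names come from the commit):

```bash
# Before: cu12-suffixed wheels (cuda-python<13, numba-cuda[cu12],
# nvidia-cuda-*-cu12) land next to the image's CUDA 13 runtime.
# pip install ".[all,cu12]"

# After: extras that match the CUDA 13 toolkit in pytorch:26.03-py3.
pip install ".[all,cu13]"
```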
Signed-off-by: Piotr Żelasko <pzelasko@nvidia.com>
The pytorch:26.03-py3 base ships a pre-release torch 2.11.0a0+nv26.3 build
(with matching torchvision/torchaudio pinned to the same local version).
Pip's default "prefer stable" policy re-downloads a stable PyPI torch
(2.10.0/2.9.1) when resolving nemo_toolkit's torch>=2.6.0, then torchvision
breaks because it expects the exact nv26.3 torch build.

Generate a constraints file from the installed torch/torchvision/torchaudio/
triton versions and point PIP_CONSTRAINT at it for the .[all,cu13] install,
so pip keeps the NGC-provided builds.

Also drop 'rm -rf $NEMO_DIR || true' — NEMO_DIR is set nowhere in the repo,
so the line is a silent no-op today and a latent footgun if the variable
is ever exported.

Signed-off-by: Piotr Żelasko <pzelasko@nvidia.com>
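A minimal sketch of the constraints trick, assuming pip freeze emits pinned '==' lines for the NGC wheels; the scratch path is illustrative:

```bash
# Freeze the NGC-provided builds into a constraints file so the resolver cannot
# swap in a stable PyPI torch (2.10.0/2.9.1) while satisfying torch>=2.6.0.
pip freeze | grep -E '^(torch|torchvision|torchaudio|triton)==' > /tmp/ngc-pins.txt
export PIP_CONSTRAINT=/tmp/ngc-pins.txt   # e.g. torch==2.11.0a0+nv26.3
pip install ".[all,cu13]"
```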
The grep for torch/torchvision/torchaudio/triton lines aborted the build
heredoc under bash -e when any segment happened to produce no match.
Append || true so an empty pin file is harmless, and echo the contents so
future failures are diagnosable from annotations alone (the full buildx
log is behind an Azure blob that many sandboxed environments can't reach).

Signed-off-by: Piotr Żelasko <pzelasko@nvidia.com>
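The hardened form of that pipeline, sketched:

```bash
# Under 'bash -e', grep exits 1 when a segment matches nothing and aborts the
# heredoc; '|| true' makes an empty pin file harmless instead of fatal.
pip freeze | grep -E '^(torch|torchvision|torchaudio|triton)==' > /tmp/ngc-pins.txt || true
# Echo the pins so failures are diagnosable from CI annotations alone.
echo "NGC pins:"; cat /tmp/ngc-pins.txt
```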
Every test job's pre-setup runs 'cp -r /opt/Megatron-LM/ /workspace/',
which fails now that Megatron-LM is no longer installed in the CI
container. The cp returns non-zero, set -e kills the docker exec, and
every test variant (import, L0 setup, unit GPU/CPU x Common/Core/Hydra/
Others) fails before running a single test.

Signed-off-by: Piotr Żelasko <pzelasko@nvidia.com>
The workflows referenced NVIDIA/NeMo/.github/actions/test-template@main,
which fetches the action from the main branch of the repo rather than
the PR branch. This meant PR-branch edits to action.yml (like removing
the '/opt/Megatron-LM/' cp that no longer applies) had no effect — the
CI kept using the stale main-branch action.

Switch to './.github/actions/test-template' and add a plain
actions/checkout@v6 (no path) right before each use so the action.yml
is present at the default workspace when GitHub resolves the local ref.

Signed-off-by: Piotr Żelasko <pzelasko@nvidia.com>
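A shell sketch of the reference rewrite; the sed one-liner is illustrative, not how the commit was authored:

```bash
# Point every workflow at the PR branch's copy of the action instead of main.
sed -i 's#uses: NVIDIA/NeMo/.github/actions/test-template@main#uses: ./.github/actions/test-template#g' \
  .github/workflows/*.yml
# Local action paths resolve against the checked-out workspace, hence the plain
# actions/checkout@v6 step added immediately before each use.
```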
