Merged
2 changes: 1 addition & 1 deletion .pre-commit-config.yaml
@@ -1447,7 +1447,7 @@ repos:
additional_dependencies:
- tomli
# add ignore words list
args: ["-L", "Mor,ans,thirdparty,subtiles,PARD,pard,therefrom", "--skip", "ATTRIBUTIONS-*.md,*.svg", "--skip", "security_scanning/*", "--skip", "tensorrt_llm/_torch/visual_gen/jit_kernels/*"]
args: ["-L", "Mor,ans,thirdparty,subtiles,PARD,pard,indx,therefrom", "--skip", "ATTRIBUTIONS-*.md,*.svg", "--skip", "security_scanning/*", "--skip", "tensorrt_llm/_torch/visual_gen/jit_kernels/*"]
exclude: 'scripts/attribution/data/cas/.*$'
- repo: https://github.com/astral-sh/ruff-pre-commit
rev: v0.9.4
6 changes: 3 additions & 3 deletions ATTRIBUTIONS-Python.md
@@ -62375,7 +62375,7 @@ Copyright 2018- The Hugging Face team. All rights reserved.
- `Homepage`: https://github.com/huggingface/transformers


## triton (3.5.1)
## triton (3.6.0)

### Licenses
License: `MIT License`
@@ -62413,7 +62413,7 @@ License: `MIT License`
- `Homepage`: https://github.com/triton-lang/triton/


## triton-kernels (3.5.1)
## triton-kernels (3.6.0)

### Licenses
License: `MIT License`
@@ -62444,7 +62444,7 @@ SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
```

### URLs
- `Source`: https://github.com/triton-lang/triton/tree/v3.5.1/python/triton_kernels
- `Source`: https://github.com/triton-lang/triton/tree/v3.6.0/python/triton_kernels


## tritonclient (2.63.0)
4 changes: 2 additions & 2 deletions README.md
@@ -9,8 +9,8 @@ state-of-the-art optimizations to perform inference efficiently on NVIDIA GPUs.
[![Ask DeepWiki](https://deepwiki.com/badge.svg)](https://deepwiki.com/NVIDIA/TensorRT-LLM)
[![python](https://img.shields.io/badge/python-3.12-green)](https://www.python.org/downloads/release/python-3123/)
[![python](https://img.shields.io/badge/python-3.10-green)](https://www.python.org/downloads/release/python-31012/)
[![cuda](https://img.shields.io/badge/cuda-13.1.0-green)](https://developer.nvidia.com/cuda-downloads)
[![torch](https://img.shields.io/badge/torch-2.9.1-green)](https://pytorch.org)
[![cuda](https://img.shields.io/badge/cuda-13.1.1-green)](https://developer.nvidia.com/cuda-downloads)
[![torch](https://img.shields.io/badge/torch-2.10.0-green)](https://pytorch.org)
[![version](https://img.shields.io/badge/release-1.3.0rc9-green)](https://github.com/NVIDIA/TensorRT-LLM/blob/main/tensorrt_llm/version.py)
[![license](https://img.shields.io/badge/license-Apache%202-blue)](https://github.com/NVIDIA/TensorRT-LLM/blob/main/LICENSE)

2 changes: 1 addition & 1 deletion cpp/tensorrt_llm/thop/fp8Op.cpp
@@ -112,7 +112,7 @@ std::tuple<Tensor, Tensor> e4m3_quantize_helper(Tensor input, at::optional<Tenso
if (scales.has_value())
{
// static quantization will use float scales by default.
scales_ = scales.value();
scales_ = scales.value().clone();
CHECK_TH_CUDA(scales_);
CHECK_TYPE(scales_, torch::kFloat32);
e4m3_static_quantize(input, quantized_input, scales_, stream, quantize_mode);
8 changes: 4 additions & 4 deletions docker/Dockerfile.multi
@@ -1,8 +1,8 @@
# Multi-stage Dockerfile
ARG BASE_IMAGE=nvcr.io/nvidia/pytorch
ARG TRITON_IMAGE=nvcr.io/nvidia/tritonserver
ARG BASE_TAG=25.12-py3
ARG TRITON_BASE_TAG=25.12-py3
ARG BASE_TAG=26.02-py3
ARG TRITON_BASE_TAG=26.02-py3
ARG DEVEL_IMAGE=devel

FROM ${BASE_IMAGE}:${BASE_TAG} AS base
@@ -52,7 +52,7 @@ RUN --mount=type=bind,source=docker/common,target=/opt/docker/common \
# Install constraints after install.sh so cleanup() doesn't delete the file mid-RUN
COPY constraints.txt /tmp/constraints.txt
RUN --mount=type=cache,target=/root/.cache/pip \
pip3 install --no-cache-dir -r /tmp/constraints.txt && \
pip3 install --ignore-installed --no-cache-dir -r /tmp/constraints.txt && \
rm /tmp/constraints.txt && \
pip3 uninstall -y nbconvert || true

@@ -67,7 +67,7 @@ RUN --mount=type=bind,source=docker/common,target=/opt/docker/common \
# WAR against https://github.com/advisories/GHSA-58pv-8j8x-9vj2
rm -rf /usr/local/lib/python3.12/dist-packages/setuptools/_vendor/jaraco.context-5.3.0.dist-info && \
# WAR against https://github.com/advisories/GHSA-8rrh-rw8j-w5fx
rm -rf /usr/local/lib/python3.12/dist-packages/setuptools/_vendor/wheel-0.45.1.dist-info
rm -rf /usr/local/lib/python3.12/dist-packages/setuptools/_vendor/wheel-*.dist-info
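Switching the WAR from a pinned `wheel-0.45.1.dist-info` path to a `wheel-*.dist-info` glob keeps it working across setuptools upgrades. A minimal sketch of why the glob is the safer choice, run against a scratch directory (paths invented for illustration):

```shell
# The glob matches whatever vendored wheel version setuptools ships,
# so the cleanup survives a version bump; a hard-coded path would not.
d=$(mktemp -d)
mkdir -p "$d/wheel-0.46.0.dist-info" "$d/keepme"
rm -rf "$d"/wheel-*.dist-info   # removes the dist-info regardless of version
ls "$d"                          # -> keepme
rm -rf "$d"
```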

# Generate OSS attribution file for devel image
ARG TRT_LLM_VER
6 changes: 3 additions & 3 deletions docker/Makefile
@@ -202,16 +202,16 @@ jenkins-rockylinux8_%: PYTHON_VERSION_TAG_ID = $(if $(findstring 3.12,${PYTHON_V
jenkins-rockylinux8_%: IMAGE_WITH_TAG = $(shell . ../jenkins/current_image_tags.properties && echo $$LLM_ROCKYLINUX8_${PYTHON_VERSION_TAG_ID}_DOCKER_IMAGE)
jenkins-rockylinux8_%: STAGE = tritondevel
jenkins-rockylinux8_%: BASE_IMAGE = nvcr.io/nvidia/cuda
jenkins-rockylinux8_%: BASE_TAG = 13.1.0-devel-rockylinux8
jenkins-rockylinux8_%: BASE_TAG = 13.1.1-devel-rockylinux8

rockylinux8_%: STAGE = tritondevel
rockylinux8_%: BASE_IMAGE = nvcr.io/nvidia/cuda
rockylinux8_%: BASE_TAG = 13.1.0-devel-rockylinux8
rockylinux8_%: BASE_TAG = 13.1.1-devel-rockylinux8

# For x86_64 and aarch64
ubuntu22_%: STAGE = tritondevel
ubuntu22_%: BASE_IMAGE = nvcr.io/nvidia/cuda
ubuntu22_%: BASE_TAG = 13.1.0-devel-ubuntu22.04
ubuntu22_%: BASE_TAG = 13.1.1-devel-ubuntu22.04

trtllm_%: STAGE = release
trtllm_%: PUSH_TO_STAGING := 0
5 changes: 5 additions & 0 deletions docker/common/install_base.sh
@@ -38,9 +38,14 @@ set_bash_env() {
cleanup() {
# Clean up apt/dnf cache
if [ -f /etc/debian_version ]; then
echo "Removing python3-pygments from Ubuntu..."
apt-get remove -y python3-pygments || true
apt-get autoremove -y || true
apt-get clean
rm -rf /var/lib/apt/lists/*
elif [ -f /etc/redhat-release ]; then
echo "Removing python3-pygments from Rocky Linux..."
dnf remove -y python3-pygments || true
dnf clean all
rm -rf /var/cache/dnf
fi
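The new `cleanup()` lines dispatch on the usual distro marker files: `/etc/debian_version` selects the apt path, `/etc/redhat-release` the dnf path. A minimal sketch of that detection pattern, pointed at a scratch directory instead of the real `/etc` (directory and variable names are invented for illustration):

```shell
# Emulate the distro check against a throwaway rootfs rather than /etc.
root=$(mktemp -d)
touch "$root/debian_version"            # pretend this is an Ubuntu image
if [ -f "$root/debian_version" ]; then
    pkg_mgr="apt-get"                    # Debian/Ubuntu branch
elif [ -f "$root/redhat-release" ]; then
    pkg_mgr="dnf"                        # Rocky Linux branch
fi
echo "$pkg_mgr"                          # -> apt-get
rm -rf "$root"
```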
2 changes: 1 addition & 1 deletion docker/common/install_cuda_toolkit.sh
@@ -5,7 +5,7 @@ set -ex
# This script is used for reinstalling CUDA on Rocky Linux 8 with the run file.
# CUDA version is usually aligned with the latest NGC CUDA image tag.
# Only use when public CUDA image is not ready.
CUDA_VER="13.1.0_590.44.01"
CUDA_VER="13.1.1_590.48.01"
CUDA_VER_SHORT="${CUDA_VER%_*}"

NVCC_VERSION_OUTPUT=$(nvcc --version)
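`CUDA_VER` packs the toolkit and driver versions into one string, and the `${CUDA_VER%_*}` expansion strips the shortest `_*` suffix to recover just the toolkit version. A quick standalone check of that behavior with the bumped value:

```shell
# "%_*" removes the shortest suffix matching "_*", i.e. the driver part.
CUDA_VER="13.1.1_590.48.01"
CUDA_VER_SHORT="${CUDA_VER%_*}"
echo "$CUDA_VER_SHORT"   # -> 13.1.1
```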
4 changes: 2 additions & 2 deletions docker/common/install_pytorch.sh
@@ -4,8 +4,8 @@ set -ex

# Use latest stable version from https://pypi.org/project/torch/#history
# and closest to the version specified in
# https://docs.nvidia.com/deeplearning/frameworks/pytorch-release-notes/rel-25-12.html#rel-25-12
TORCH_VERSION="2.9.1"
# https://docs.nvidia.com/deeplearning/frameworks/pytorch-release-notes/rel-26-02.html#rel-26-02
TORCH_VERSION="2.10.0"
SYSTEM_ID=$(grep -oP '(?<=^ID=).+' /etc/os-release | tr -d '"')

prepare_environment() {
20 changes: 8 additions & 12 deletions docker/common/install_tensorrt.sh
@@ -2,20 +2,20 @@

set -ex

TRT_VER="10.14.1.48"
TRT_VER="10.15.1.29"
# Align with the pre-installed cuDNN / cuBLAS / NCCL versions from
# https://docs.nvidia.com/deeplearning/frameworks/pytorch-release-notes/rel-25-12.html#rel-25-12
CUDA_VER="13.1" # 13.1.0
# https://docs.nvidia.com/deeplearning/frameworks/pytorch-release-notes/rel-26-02.html#rel-26-02
CUDA_VER="13.1" # 13.1.1
# Keep the installation for cuDNN if users want to install PyTorch with source codes.
# PyTorch 2.x can compile with cuDNN v9.
CUDNN_VER="9.17.0.29-1"
NCCL_VER="2.28.9-1+cuda13.0"
CUBLAS_VER="13.2.0.9-1"
CUDNN_VER="9.19.0.56-1"
NCCL_VER="2.29.2-1+cuda13.1"
CUBLAS_VER="13.2.1.1-1"
# Align with the pre-installed CUDA / NVCC / NVRTC versions from
# https://docs.nvidia.com/cuda/cuda-toolkit-release-notes/index.html
NVRTC_VER="13.1.80-1"
NVRTC_VER="13.1.115-1"
CUDA_RUNTIME="13.1.80-1"
CUDA_DRIVER_VERSION="590.44.01-1.el8"
CUDA_DRIVER_VERSION="590.48.01-1.el8"

for i in "$@"; do
case $i in
@@ -120,10 +120,6 @@ install_tensorrt() {
PARSED_PY_VERSION=$(echo "${PY_VERSION//./}")

TRT_CUDA_VERSION=${CUDA_VER}
# No CUDA 13.1 version for TensorRT yet. Use CUDA 13.0 package instead.
if [ "$CUDA_VER" = "13.1" ]; then
TRT_CUDA_VERSION="13.0"
fi
TRT_VER_SHORT=$(echo $TRT_VER | cut -d. -f1-3)
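`TRT_VER` carries four components, while download URLs typically use only `major.minor.patch`; `cut -d. -f1-3` drops the trailing build number. A standalone check with the new version string:

```shell
# Keep the first three dot-separated fields of the four-part version.
TRT_VER="10.15.1.29"
TRT_VER_SHORT=$(echo "$TRT_VER" | cut -d. -f1-3)
echo "$TRT_VER_SHORT"   # -> 10.15.1
```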

if [ -z "$RELEASE_URL_TRT" ];then
2 changes: 1 addition & 1 deletion docs/source/installation/build-from-source-linux.md
@@ -2,7 +2,7 @@

# Building from Source Code on Linux

This document provides instructions for building TensorRT LLM from source code on Linux. Building from source is recommended for achieving optimal performance, enabling debugging capabilities, or when you need a different [GNU CXX11 ABI](https://gcc.gnu.org/onlinedocs/libstdc++/manual/using_dual_abi.html) configuration than what is available in the pre-built TensorRT LLM wheel on PyPI. Note that the current pre-built TensorRT LLM wheel on PyPI is linked against PyTorch 2.9.1, which uses the new CXX11 ABI.
This document provides instructions for building TensorRT LLM from source code on Linux. Building from source is recommended for achieving optimal performance, enabling debugging capabilities, or when you need a different [GNU CXX11 ABI](https://gcc.gnu.org/onlinedocs/libstdc++/manual/using_dual_abi.html) configuration than what is available in the pre-built TensorRT LLM wheel on PyPI. Note that the current pre-built TensorRT LLM wheel on PyPI is linked against PyTorch 2.10.0, which uses the new CXX11 ABI.


## Prerequisites
4 changes: 2 additions & 2 deletions docs/source/installation/linux.md
@@ -17,7 +17,7 @@

```bash
# By default, PyTorch CUDA 12.8 package is installed. Install PyTorch CUDA 13.0 package to align with the CUDA version used for building TensorRT LLM wheels.
pip3 install torch==2.9.1 torchvision --index-url https://download.pytorch.org/whl/cu130
pip3 install torch==2.10.0 torchvision --index-url https://download.pytorch.org/whl/cu130

sudo apt-get -y install libopenmpi-dev

@@ -40,7 +40,7 @@
pip3 install --ignore-installed pip setuptools wheel && pip3 install tensorrt_llm
```

> **Note:** The TensorRT LLM wheel on PyPI is built with PyTorch 2.9.1. This version may be incompatible with the NVIDIA NGC PyTorch 25.12 container, which uses a more recent PyTorch build from the main branch. If you are using this container or a similar environment, please install the pre-built wheel located at `/app/tensorrt_llm` inside the TensorRT LLM NGC Release container instead.
> **Note:** The TensorRT LLM wheel on PyPI is built with PyTorch 2.10.0. This version may be incompatible with the NVIDIA NGC PyTorch 25.12 container, which uses a more recent PyTorch build from the main branch. If you are using this container or a similar environment, please install the pre-built wheel located at `/app/tensorrt_llm` inside the TensorRT LLM NGC Release container instead.

**This project will download and install additional third-party open source software projects. Review the license terms of these open source projects before use.**

2 changes: 1 addition & 1 deletion docs/source/legacy/reference/support-matrix.md
@@ -154,7 +154,7 @@ The following table shows the supported software for TensorRT-LLM.
* -
- Software Compatibility
* - Container
- [25.12](https://docs.nvidia.com/deeplearning/frameworks/support-matrix/index.html)
- [26.02](https://docs.nvidia.com/deeplearning/frameworks/support-matrix/index.html)
* - TensorRT
- [10.14](https://docs.nvidia.com/deeplearning/tensorrt/release-notes/index.html)
* - Precision
2 changes: 1 addition & 1 deletion jenkins/Build.groovy
@@ -438,7 +438,7 @@ def runLLMBuild(pipeline, buildFlags, tarName, is_linux_x86_64)
def llmPath = sh (script: "realpath ${LLM_ROOT}",returnStdout: true).trim()
// TODO: Remove after the cmake version is upgraded to 3.31.8
// Get triton tag from docker/dockerfile.multi
def tritonShortTag = "r25.12"
def tritonShortTag = "r26.02"
sh "cd ${LLM_ROOT}/triton_backend/inflight_batcher_llm && mkdir build && cd build && cmake .. -DTRTLLM_DIR=${llmPath} -DTRITON_COMMON_REPO_TAG=${tritonShortTag} -DTRITON_CORE_REPO_TAG=${tritonShortTag} -DTRITON_THIRD_PARTY_REPO_TAG=${tritonShortTag} -DTRITON_BACKEND_REPO_TAG=${tritonShortTag} -DUSE_CXX11_ABI=ON && make -j${buildJobs} install"

// Step 3: packaging wheels into tarfile
42 changes: 24 additions & 18 deletions jenkins/L0_Test.groovy
@@ -40,7 +40,7 @@ LLM_ROCKYLINUX8_PY310_DOCKER_IMAGE = env.wheelDockerImagePy310
LLM_ROCKYLINUX8_PY312_DOCKER_IMAGE = env.wheelDockerImagePy312

// DLFW torch image
DLFW_IMAGE = "urm.nvidia.com/docker/nvidia/pytorch:25.12-py3"
DLFW_IMAGE = "urm.nvidia.com/docker/nvidia/pytorch:26.02-py3"

//Ubuntu base image
UBUNTU_22_04_IMAGE = "urm.nvidia.com/docker/ubuntu:22.04"
@@ -1146,6 +1146,7 @@ def runLLMTestlistWithSbatch(pipeline, platform, testList, config=VANILLA_CONFIG

// Output is the corresponding scriptLaunchPathLocal script under the disaggMode
sh """
pip3 install pyyaml && \\
python3 ${scriptSubmitLocalPath} \\
--run-ci \\
--llm-src ${llmSrcLocal} \\
@@ -1957,7 +1958,7 @@ def launchTestListCheck(pipeline)
def llmPath = sh (script: "realpath .", returnStdout: true).trim()
def llmSrc = "${llmPath}/TensorRT-LLM/src"
trtllm_utils.llmExecStepWithRetry(pipeline, script: "pip3 install -r ${llmSrc}/requirements-dev.txt")
sh "NVIDIA_TRITON_SERVER_VERSION=25.12 LLM_ROOT=${llmSrc} LLM_BACKEND_ROOT=${llmSrc}/triton_backend python3 ${llmSrc}/scripts/check_test_list.py --l0 --qa --waive"
sh "NVIDIA_TRITON_SERVER_VERSION=26.02 LLM_ROOT=${llmSrc} LLM_BACKEND_ROOT=${llmSrc}/triton_backend python3 ${llmSrc}/scripts/check_test_list.py --l0 --qa --waive"
} catch (InterruptedException e) {
throw e
} catch (Exception e) {
@@ -2967,17 +2968,19 @@ def runLLMBuild(pipeline, cpu_arch, reinstall_dependencies=false, wheel_path="",
echo "uploading ${wheelName} to ${cpu_arch}/${wheel_path}"
trtllm_utils.uploadArtifacts("tensorrt_llm/build/${wheelName}", "${UPLOAD_PATH}/${cpu_arch}/${wheel_path}")

if (reinstall_dependencies == true) {
if (reinstall_dependencies) {
// Test installation in the new environment
def pip_keep = "-e 'pip'"
// Preserve the CUDA 13.0 torch and torchvision packages
def pip_keep = "^pip==|^torch==|^torchvision=="
def remove_trt = "rm -rf /usr/local/tensorrt"
if (env.alternativeTRT) {
pip_keep += " -e tensorrt"
pip_keep += "|^tensorrt=="
remove_trt = "echo keep /usr/local/tensorrt"
}
sh "#!/bin/bash \n" + "pip3 list --format=freeze | egrep -v ${pip_keep} | xargs pip3 uninstall -y"
sh "#!/bin/bash \n" + "yum remove -y libcudnn* libnccl* libcublas* && ${remove_trt}"
sh "bash -c 'pip3 list --format=freeze | grep -Ev \"${pip_keep}\" | xargs -r pip3 uninstall -y'"
sh "bash -c 'yum remove -y libcudnn* libnccl* libcublas* && ${remove_trt}'"
}
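The rewritten uninstall step filters `pip3 list --format=freeze` through `grep -Ev` with anchored alternatives, so only the named packages survive the purge (and `xargs -r` skips the uninstall when nothing matches). A standalone sketch of the same filter; the package list here is invented for illustration:

```shell
# Lines matching any anchored alternative are dropped from the uninstall set,
# i.e. pip/torch/torchvision are kept while everything else is removed.
pip_keep='^pip==|^torch==|^torchvision=='
printf 'numpy==2.1.0\npip==25.0\ntorch==2.10.0\ntorchvision==0.25.0\n' \
    | grep -Ev "$pip_keep"
# -> numpy==2.1.0
```

Anchoring with `^` and matching the `==` separator avoids accidentally keeping packages whose names merely start with `torch` (e.g. a hypothetical `torchaudio-extras`).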

// Test preview installation
trtllm_utils.llmExecStepWithRetry(pipeline, script: "#!/bin/bash \n" + "cd tensorrt_llm/ && pip3 install pytest build/tensorrt_llm-*.whl")
if (env.alternativeTRT) {
@@ -3016,17 +3019,22 @@ def runPackageSanityCheck(pipeline, wheel_path, reinstall_dependencies=false, cp
trtllm_utils.replaceWithAlternativeTRT(env.alternativeTRT, cpver)
sh "bash -c 'pip3 show tensorrt || true'"
}

if (reinstall_dependencies) {
// Test installation in the new environment
def pip_keep = "-e 'pip'"
// Preserve the CUDA 13.0 torch and torchvision packages
def pip_keep = "^pip==|^torch==|^torchvision=="
def remove_trt = "rm -rf /usr/local/tensorrt"
if (env.alternativeTRT) {
pip_keep += " -e tensorrt"
pip_keep += "|^tensorrt=="
remove_trt = "echo keep /usr/local/tensorrt"
}
sh "bash -c 'pip3 list --format=freeze | egrep -v ${pip_keep} | xargs pip3 uninstall -y'"
sh "bash -c 'pip3 list --format=freeze | grep -Ev \"${pip_keep}\" | xargs -r pip3 uninstall -y'"
sh "bash -c 'yum remove -y libcudnn* libnccl* libcublas* && ${remove_trt}'"
}
// WAR: remove python3-pygments first since it is pre-installed in the NGC PyTorch image
trtllm_utils.llmExecStepWithRetry(pipeline, script: "apt-get remove -y python3-pygments")

// Test preview installation
trtllm_utils.llmExecStepWithRetry(pipeline, script: "bash -c 'pip3 install pytest tensorrt_llm-*.whl'")
if (env.alternativeTRT) {
@@ -3477,7 +3485,7 @@ def launchTestJobs(pipeline, testFilter)
// Python version and OS for sanity check
x86SanityCheckConfigs = [
"PY312-DLFW": [
LLM_DOCKER_IMAGE, // Workaround ABI incompatibilities between PyTorch 2.9.1 and 2.10.0a0
LLM_ROCKYLINUX8_PY312_DOCKER_IMAGE, // Workaround ABI incompatibilities between PyTorch 2.9.1 and 2.10.0a0
"B200_PCIe",
X86_64_TRIPLE,
false,
@@ -3515,8 +3523,8 @@ def launchTestJobs(pipeline, testFilter)
AARCH64_TRIPLE,
false,
"",
DLFW_IMAGE,
false, // Extra PyTorch CUDA 13.0 install
UBUNTU_24_04_IMAGE,
true, // Extra PyTorch CUDA 13.0 install
],
"PY312-DLFW": [
LLM_DOCKER_IMAGE,
@@ -3604,6 +3612,8 @@ def launchTestJobs(pipeline, testFilter)
// Clean up the pip constraint file from the base NGC PyTorch image.
if (values[5] == DLFW_IMAGE) {
trtllm_utils.llmExecStepWithRetry(pipeline, script: "[ -f /etc/pip/constraint.txt ] && : > /etc/pip/constraint.txt || true")
// Remove the Debian python3-pygments package because it conflicts with the pip-installed Pygments in the DLFW image.
trtllm_utils.llmExecStepWithRetry(pipeline, script: "apt-get remove -y python3-pygments")
}
trtllm_utils.llmExecStepWithRetry(pipeline, script: "apt-get update && apt-get install -y python3-pip git rsync curl wget")
trtllm_utils.checkoutSource(LLM_REPO, env.gitlabCommit, LLM_ROOT, false, true)
@@ -3622,11 +3632,7 @@ def launchTestJobs(pipeline, testFilter)
echo "###### Extra PyTorch CUDA 13.0 install Start ######"
// Use internal mirror instead of https://download.pytorch.org/whl/cu130 for better network stability.
// PyTorch CUDA 13.0 package and torchvision package can be installed as expected.
if (k8s_arch == "amd64") {
trtllm_utils.llmExecStepWithRetry(pipeline, script: "pip3 install torch==2.9.1+cu130 torchvision==0.24.1+cu130 --extra-index-url https://urm.nvidia.com/artifactory/api/pypi/pytorch-cu128-remote/simple --extra-index-url https://download.pytorch.org/whl/cu130")
} else {
trtllm_utils.llmExecStepWithRetry(pipeline, script: "pip3 install torch==2.9.1+cu130 torchvision==0.24.1 --extra-index-url https://urm.nvidia.com/artifactory/api/pypi/pytorch-cu128-remote/simple --extra-index-url https://download.pytorch.org/whl/cu130")
}
trtllm_utils.llmExecStepWithRetry(pipeline, script: "pip3 install torch==2.10.0+cu130 torchvision==0.25.0+cu130 --extra-index-url https://urm.nvidia.com/artifactory/api/pypi/pytorch-cu128-remote/simple --extra-index-url https://download.pytorch.org/whl/cu130")
}

def libEnv = []