
feat: add NVIDIA RTX 50 series (Blackwell) and CUDA support #9598

Open
youtalk wants to merge 2 commits into carla-simulator:ue5-dev from youtalk:feat/nvidia-cuda-support

Conversation

@youtalk commented Mar 20, 2026

Description

Add NVIDIA RTX 50 series (Blackwell) GPU support with driver 570+ and CUDA 12.8+ compatibility, while maintaining backward compatibility with existing RTX 30/40 series environments.

Changes:

  • CMake CUDA detection (CMake/CUDA.cmake): Optional CUDA Toolkit detection via find_package(QUIET), with PyTorch validation (warning if CUDA missing, error if below minimum version)
  • CMake options (CMake/Options.cmake): CARLA_CUDA_ARCHITECTURES (sm_75–sm_120) and CARLA_CUDA_MIN_VERSION (11.0)
  • Docker (Util/Docker/Release.Dockerfile): Bake NVIDIA_VISIBLE_DEVICES and NVIDIA_DRIVER_CAPABILITIES ENV vars
  • Docker docs (Docs/build_docker.md, Docs/start_quickstart.md): CDI-based NVIDIA Container Toolkit v2 as default, legacy --runtime=nvidia as fallback
  • GPU/driver requirements (Docs/start_quickstart.md): RTX 50 series needs driver 570+, CUDA 12.8+ recommended for Blackwell
  • Hardware recommendations (README.md): Add RTX 5090
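The CDI-first Docker guidance described above can be sketched as shell commands. These invocations are illustrative only; the image tag and exact flags are assumptions, not copied from the PR's docs diff:

```shell
# CDI-based device injection (NVIDIA Container Toolkit v2, the new default).
# Requires CDI device specs to be generated and enabled for the Docker daemon.
docker run --rm -it --device nvidia.com/gpu=all carlasim/carla:latest

# Legacy runtime fallback for pre-CDI setups, matching the ENV vars
# baked into the release image.
docker run --rm -it --runtime=nvidia -e NVIDIA_VISIBLE_DEVICES=all carlasim/carla:latest
```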

Related:

Where has this been tested?

  • Platform(s): Linux (Ubuntu 24.04)
  • Python version(s): N/A (docs and CMake only)
  • Unreal Engine version(s): 5.5

Possible Drawbacks

None. CUDA detection is fully optional (`find_package(QUIET)`), so existing builds without CUDA are unaffected. The legacy Docker runtime is documented as a fallback.



youtalk added 2 commits March 20, 2026 11:51
- Add CMake CUDA Toolkit detection (CMake/CUDA.cmake) with
  architecture validation and PyTorch integration
- Add CARLA_CUDA_ARCHITECTURES and CARLA_CUDA_MIN_VERSION options
- Bake NVIDIA_VISIBLE_DEVICES/DRIVER_CAPABILITIES in Dockerfile
- Update Docker docs to CDI-based NVIDIA Container Toolkit v2
  with legacy runtime fallback
- Update GPU/driver requirements for RTX 50 series (driver 570+)
- Add RTX 5090 to README hardware recommendations
Signed-off-by: Yutaka Kondo <yutaka.kondo@youtalk.jp>
@youtalk youtalk marked this pull request as ready for review March 23, 2026 22:10
@youtalk youtalk requested a review from a team as a code owner March 23, 2026 22:10
Copilot AI review requested due to automatic review settings March 23, 2026 22:10

Copilot AI left a comment


Pull request overview

This PR updates CARLA’s build + Docker documentation to reflect NVIDIA RTX 50 series (Blackwell) support expectations, and introduces optional CUDA Toolkit detection/validation in CMake to better guide CUDA/PyTorch-enabled builds.

Changes:

  • Add a new CMake module (CMake/CUDA.cmake) to optionally detect the CUDA Toolkit and validate minimum CUDA when ENABLE_PYTORCH=ON.
  • Update Docker image/runtime guidance to prefer NVIDIA Container Toolkit v2 CDI device injection, with legacy runtime examples retained.
  • Refresh hardware/docs/changelog to mention RTX 50 series (e.g., RTX 5090) and newer driver/CUDA recommendations.

Reviewed changes

Copilot reviewed 8 out of 8 changed files in this pull request and generated 4 comments.

Summary per file:

| File | Description |
| --- | --- |
| Util/Docker/Release.Dockerfile | Bakes NVIDIA runtime env vars into the release image. |
| README.md | Updates recommended GPU list to explicitly mention RTX 5090. |
| Docs/start_quickstart.md | Updates GPU/driver/CUDA guidance and switches Docker examples to CDI-first. |
| Docs/build_docker.md | Updates Docker prerequisites and run commands to CDI-first, with legacy fallback. |
| CMakeLists.txt | Includes the new CUDA detection module during configure. |
| CMake/Options.cmake | Adds CUDA-related configuration variables intended for PyTorch/CUDA builds. |
| CMake/CUDA.cmake | Implements optional CUDA Toolkit detection + version checks. |
| CHANGELOG.md | Notes the new RTX 50/CUDA/CDI documentation changes. |


Comment on lines +47 to +57
```cmake
carla_string_option (
  CARLA_CUDA_ARCHITECTURES
  "Semicolon-separated list of CUDA GPU architectures (e.g. 75;80;86;89;100;120). Used when ENABLE_PYTORCH=ON and CUDA is available."
  "75;80;86;89;100;120"
)

carla_string_option (
  CARLA_CUDA_MIN_VERSION
  "Minimum required CUDA Toolkit version when ENABLE_PYTORCH=ON."
  "11.0"
)
```

Copilot AI Mar 23, 2026


CARLA_CUDA_ARCHITECTURES / CARLA_CUDA_MIN_VERSION are added as “options”, but carla_string_option() only does a plain set() (non-CACHE) which overwrites any -D... value from the command line. If these are intended to be user-configurable, define them as CACHE STRING entries (and only set the default when they’re not already defined) so users can override them in a standard CMake way.

Suggested change (replacing the two `carla_string_option` calls above):

```cmake
# CUDA-related options: define as CACHE STRING so users can override via -D.
if (NOT DEFINED CARLA_CUDA_ARCHITECTURES)
  set (CARLA_CUDA_ARCHITECTURES
    "75;80;86;89;100;120"
    CACHE STRING
    "Semicolon-separated list of CUDA GPU architectures (e.g. 75;80;86;89;100;120). Used when ENABLE_PYTORCH=ON and CUDA is available.")
endif ()

if (NOT DEFINED CARLA_CUDA_MIN_VERSION)
  set (CARLA_CUDA_MIN_VERSION
    "11.0"
    CACHE STRING
    "Minimum required CUDA Toolkit version when ENABLE_PYTORCH=ON.")
endif ()
```
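If the `CACHE STRING` form is adopted, the variables become overridable in the standard CMake way. A hypothetical configure invocation (build directory and values are illustrative, not from the PR):

```shell
cmake -S . -B build -DENABLE_PYTORCH=ON \
  -DCARLA_CUDA_ARCHITECTURES="89;120" \
  -DCARLA_CUDA_MIN_VERSION="12.0"
```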

Comment on lines +5 to +24
```cmake
find_package (CUDAToolkit QUIET)

if (CUDAToolkit_FOUND)
  carla_message ("CUDA Toolkit ${CUDAToolkit_VERSION} found at ${CUDAToolkit_TARGET_DIR}")

  if (CUDAToolkit_VERSION VERSION_LESS "12.8")
    carla_message (
      "CUDA ${CUDAToolkit_VERSION} detected. "
      "CUDA 12.8+ is recommended for NVIDIA Blackwell (RTX 50 series / sm_100 / sm_120) GPU support."
    )
  endif ()
else ()
  carla_message ("CUDA Toolkit not found. CUDA-dependent features will be unavailable.")
endif ()

# ================================
# PyTorch + CUDA Validation
# ================================

if (ENABLE_PYTORCH)
```

Copilot AI Mar 23, 2026


CUDA detection is fully optional, but this file will always emit STATUS output during configure (including the “CUDA Toolkit not found…” message) even when ENABLE_PYTORCH is OFF. Consider moving both the find_package(CUDAToolkit) call and the related messages under if(ENABLE_PYTORCH) and/or using carla_message_verbose() so default configure output stays clean for non-CUDA builds.

Suggested change (moving the detection under `if (ENABLE_PYTORCH)`):

```cmake
# ================================
# PyTorch + CUDA Validation
# ================================

if (ENABLE_PYTORCH)
  find_package (CUDAToolkit QUIET)

  if (CUDAToolkit_FOUND)
    carla_message ("CUDA Toolkit ${CUDAToolkit_VERSION} found at ${CUDAToolkit_TARGET_DIR}")

    if (CUDAToolkit_VERSION VERSION_LESS "12.8")
      carla_message (
        "CUDA ${CUDAToolkit_VERSION} detected. "
        "CUDA 12.8+ is recommended for NVIDIA Blackwell (RTX 50 series / sm_100 / sm_120) GPU support."
      )
    endif ()
  else ()
    carla_message ("CUDA Toolkit not found. CUDA-dependent features will be unavailable.")
  endif ()
```

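The `VERSION_LESS` gate discussed in this comment is easy to illustrate outside CMake. A minimal Python sketch of the same three-way classification (function names and return labels are invented for illustration; the 11.0 minimum and 12.8 recommendation come from the PR):

```python
# Illustrative re-expression of the CUDA version gate described in the PR:
# below the minimum is a hard error for PyTorch builds, below the
# recommended version only warns. Names here are hypothetical.

def parse_version(v: str) -> tuple[int, ...]:
    """Split a dotted version string into comparable integer parts,
    matching CMake's numeric VERSION_LESS semantics."""
    return tuple(int(part) for part in v.split("."))

def classify_cuda(version: str, minimum: str = "11.0", recommended: str = "12.8") -> str:
    """Return 'error' below the minimum, 'warn' below the recommended
    version, and 'ok' otherwise."""
    v = parse_version(version)
    if v < parse_version(minimum):
        return "error"   # configure should fail for PyTorch builds
    if v < parse_version(recommended):
        return "warn"    # usable, but not ideal for sm_100/sm_120
    return "ok"

print(classify_cuda("10.2"))   # error
print(classify_cuda("12.4"))   # warn
print(classify_cuda("12.8"))   # ok
```

Note that tuple comparison handles two-digit minor versions correctly (12.10 sorts above 12.8), which a naive string comparison would not.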
Comment on lines +39 to +41
```cmake
add_compile_definitions (CARLA_WITH_CUDA)
set (CMAKE_CUDA_ARCHITECTURES "${CARLA_CUDA_ARCHITECTURES}")
carla_message ("CUDA architectures: ${CMAKE_CUDA_ARCHITECTURES}")
```

Copilot AI Mar 23, 2026


add_compile_definitions(CARLA_WITH_CUDA) and set(CMAKE_CUDA_ARCHITECTURES ...) don’t appear to have any effect right now: the repo doesn’t enable the CUDA language and there are no CUDA sources, and CARLA_WITH_CUDA isn’t referenced anywhere. Either wire these settings into the actual places that build/use PyTorch/CUDA (e.g., define the existing WITH_PYTORCH / relevant build flags) or drop them to avoid suggesting configurability/behavior that isn’t currently present.

Suggested change (replacing the three lines above):

```cmake
carla_message ("CUDA architectures: ${CARLA_CUDA_ARCHITECTURES}")
```

Original:

* __An adequate GPU__: CARLA aims for realistic simulations that require a lot of compute power. We recommend at minimum an NVIDIA RTX 3000 series or better with at least **16 Gb of VRAM**. A dedicated GPU, separate from the GPU used for CARLA, is highly recommended to handle large machine learning workloads.
* __GPU drivers__: CARLA Unreal Engine 5 version requires NVIDIA RTX driver release **550 or later for Ubuntu** and NVIDIA RTX driver release **560 or later for Windows**

Updated in this PR:

* __An adequate GPU__: CARLA aims for realistic simulations that require a lot of compute power. We recommend at minimum an NVIDIA RTX 3000 series or better with at least **16 Gb of VRAM**. NVIDIA RTX 40 and 50 series GPUs are also supported. A dedicated GPU, separate from the GPU used for CARLA, is highly recommended to handle large machine learning workloads.
* __GPU drivers__: CARLA Unreal Engine 5 version requires NVIDIA RTX driver release **550 or later for Ubuntu** and NVIDIA RTX driver release **560 or later for Windows**. For NVIDIA RTX 50 series (Blackwell) GPUs, driver release **570 or later** is required. CUDA Toolkit **11.0** or later is required for PyTorch GPU features (CUDA **12.8+** recommended for Blackwell GPUs).

Copilot AI Mar 23, 2026


This bullet mixes runtime driver requirements with optional build-time CUDA Toolkit guidance. To avoid users reading this as “CUDA is required to run CARLA”, consider splitting into a separate “CUDA Toolkit (optional, for PyTorch GPU features)” requirement (and keep the GPU driver line focused on drivers only).

Suggested change (splitting driver and CUDA Toolkit requirements):

* __GPU drivers__: CARLA Unreal Engine 5 version requires NVIDIA RTX driver release **550 or later for Ubuntu** and NVIDIA RTX driver release **560 or later for Windows**. For NVIDIA RTX 50 series (Blackwell) GPUs, driver release **570 or later** is required.
* __CUDA Toolkit (optional, for PyTorch GPU features)__: CUDA Toolkit **11.0** or later is required for PyTorch GPU features (CUDA **12.8+** recommended for Blackwell GPUs).
