Releases: NVIDIA/nvidia-container-toolkit

v1.13.0-rc.2

15 Mar 08:01

Pre-release
  • Don't fail chmod hook if paths are not injected
  • Only create by-path symlinks if CDI devices are actually requested.
  • Fix possible blank nvidia-ctk path in generated CDI specifications
  • Fix error in postun scriptlet on RPM-based systems
  • Only check NVIDIA_VISIBLE_DEVICES for environment variables if no annotations are specified.
  • Add cdi.default-kind config option for constructing fully-qualified CDI device names in CDI mode
  • Add support for accept-nvidia-visible-devices-envvar-unprivileged config setting in CDI mode
  • Add nvidia-container-runtime-hook.skip-mode-detection config option to bypass mode detection. This allows legacy and cdi mode, for example, to be used at the same time.
  • Add support for generating CDI specifications for GDS and MOFED devices
  • Ensure CDI specification is validated on save when generating a spec
  • Rename --discovery-mode argument to --mode for nvidia-ctk cdi generate
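
As a minimal sketch of the renamed command (the output path and mode value below are illustrative; confirm the available flags with nvidia-ctk cdi generate --help for the installed version):

  # Generate a CDI specification using the renamed --mode flag
  # (previously --discovery-mode); the output path is an example only.
  sudo nvidia-ctk cdi generate \
      --mode=nvml \
      --format=yaml \
      --output=/etc/cdi/nvidia.yaml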

Changes in the toolkit-container

  • Add --cdi-enabled flag to toolkit config
  • Install nvidia-ctk from toolkit container
  • Use installed nvidia-ctk path in NVIDIA Container Toolkit config
  • Bump CUDA base images to 12.1.0
  • Set nvidia-ctk path in the NVIDIA Container Toolkit config
  • Add cdi.k8s.io/* to set of allowed annotations in containerd config
  • Generate CDI specification for use in management containers
  • Install experimental runtime as nvidia-container-runtime.experimental instead of nvidia-container-runtime-experimental
  • Install and configure mode-specific runtimes for cdi and legacy modes

Changes from libnvidia-container v1.13.0-rc.2

  • Fix segfault on WSL2 systems. This was triggered in the v1.12.1 and v1.13.0-rc.1 releases.

Full Changelog: v1.13.0-rc.1...v1.13.0-rc.2

v1.12.1

13 Mar 14:14

  • Don't fail chmod hook if paths are not injected. Fixes a known issue in the v1.12.0 release
  • Fix possible blank nvidia-ctk path in generated CDI specifications
  • Fix error in postun scriptlet on RPM-based systems
  • Fix missing NVML symbols when running nvidia-ctk on some platforms [#49]
  • Discover all gsp*.bin GSP firmware files when generating a CDI specification.
  • Remove fedora35 packaging targets

Changes in toolkit-container

  • Install nvidia-ctk from toolkit container
  • Use installed nvidia-ctk path in NVIDIA Container Toolkit config
  • Bump CUDA base images to 12.1.0

Changes from libnvidia-container v1.12.1

  • Include all gsp*.bin firmware files if present

Full Changelog: v1.12.0...v1.12.1

v1.13.0-rc.1

21 Feb 10:46

Pre-release
  • Include MIG-enabled devices as GPUs when generating CDI specification
  • Fix missing NVML symbols when running nvidia-ctk on some platforms [#49]
  • Add CDI spec generation for WSL2-based systems to nvidia-ctk cdi generate command
  • Add auto mode to nvidia-ctk cdi generate command to automatically detect a WSL2-based system over a standard NVML-based system.
  • Add mode-specific (.cdi and .legacy) NVIDIA Container Runtime binaries for use in the GPU Operator
  • Discover all gsp*.bin GSP firmware files when generating a CDI specification.
  • Align .deb and .rpm release candidate package versions
  • Remove fedora35 packaging targets

Changes in toolkit-container

  • Install nvidia-container-toolkit-operator-extensions package for mode-specific executables.
  • Allow nvidia-container-runtime.mode to be set when configuring the NVIDIA Container Toolkit

Changes from libnvidia-container v1.13.0-rc.1

  • Include all gsp*.bin firmware files if present
  • Align .deb and .rpm release candidate package versions
  • Remove fedora35 packaging targets

Full Changelog: v1.12.0...v1.13.0-rc.1

Known Issues

Failure to run container due to missing /dev/dri and/or /dev/nvidia-caps paths in the container

As of v1.12.0, using a CDI specification generated with the nvidia-ctk cdi generate command may result in a failure to run a container if a device is selected which has no DRM device nodes (in /dev/dri) or NVIDIA caps devices (in /dev/nvidia-caps) associated with it. The workaround is to remove the following createContainer hook:

  - args:
    - nvidia-ctk
    - hook
    - chmod
    - --mode
    - "755"
    - --path
    - /dev/dri
    hookName: createContainer
    path: /usr/bin/nvidia-ctk

from the generated CDI specification, or select a device that includes associated DRM nodes at /dev/dri and/or NVIDIA caps devices at /dev/nvidia-caps.

v1.12.0

06 Feb 10:49

This is a promotion of the v1.12.0-rc.5 release to GA.

This release of the NVIDIA Container Toolkit v1.12.0 adds the following features:

  • Improved support for headless Vulkan applications in containerized environments.
  • Tooling to generate Container Device Interface (CDI) specifications for GPU devices. The use of CDI is now the recommended mechanism for using GPUs in podman.
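
As an illustrative example (assuming a podman version with CDI support and a CDI specification already generated on the host, e.g. under /etc/cdi), a GPU can be requested by its CDI device name:

  # Request all GPUs via CDI; the device name assumes the spec generated
  # by nvidia-ctk cdi generate with its default naming.
  podman run --rm --device nvidia.com/gpu=all ubuntu nvidia-smi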

NOTE: This release is a unified release of the NVIDIA Container Toolkit consisting of multiple packages.

The packages for this release are published to the libnvidia-container package repositories.

Full Changelog: v1.11.0...v1.12.0

v1.12.0

Changes for the container-toolkit container

  • Update CUDA base images to 12.0.1

Changes from libnvidia-container v1.12.0

  • Add nvcubins.bin to DriverStore components under WSL2

v1.12.0-rc.5

  • Fix bug where the nvidia-ctk path was not properly resolved. This caused failures to run containers when the runtime was configured in csv mode or if NVIDIA_DRIVER_CAPABILITIES included graphics or display (e.g. all).

v1.12.0-rc.4

  • Generate a minimum CDI spec version for improved compatibility.
  • Add --device-name-strategy [index | uuid | type-index] options to the nvidia-ctk cdi generate command that can be used to control how device names are constructed.
  • Set default for CDI device name generation to index to generate device names such as nvidia.com/gpu=0 or nvidia.com/gpu=1:0 by default. NOTE: This is a breaking change and will cause a v0.5.0 CDI specification to be generated. To retain the previous behavior and generate a v0.4.0 CDI specification with nvidia.com/gpu=gpu0 or nvidia.com/gpu=mig1:0 device names, use the type-index option.
  • Ensure that the nvidia-container-toolkit package can be upgraded from versions older than v1.11.0 on RPM-based systems.

v1.12.0-rc.3

  • Don't fail if by-path symlinks for DRM devices do not exist
  • Replace the --json flag with a --format [json|yaml] flag for the nvidia-ctk cdi generate command
  • Ensure that the CDI output folder is created if required
  • When generating a CDI specification use a blank host path for devices to ensure compatibility with the v0.4.0 CDI specification
  • Add injection of Wayland JSON files
  • Add GSP firmware paths to generated CDI specification
  • Add --root flag to nvidia-ctk cdi generate command to allow for a non-standard driver root to be specified

v1.12.0-rc.2

  • Update golang version to 1.18
  • Inject Direct Rendering Manager (DRM) devices into a container using the NVIDIA Container Runtime
  • Improve logging of errors from the NVIDIA Container Runtime
  • Improve CDI specification generation to support rootless podman
  • Use nvidia-ctk cdi generate to generate CDI specifications instead of nvidia-ctk info generate-cdi

Changes from libnvidia-container v1.12.0-rc.2

  • Skip creation of existing files when mounting them from the host

v1.12.0-rc.1

  • Improve injection of Vulkan configurations and libraries
  • Add nvidia-ctk info generate-cdi command to generate CDI specifications for available devices

Changes for the container-toolkit container

  • Update CUDA base images to 11.8.0

Changes from libnvidia-container v1.12.0-rc.1

  • Add NVVM Compiler Library (libnvidia-nvvm.so) to list of compute libraries

v1.12.0-rc.5

02 Feb 10:37

Pre-release
  • Fix bug where the nvidia-ctk path was not properly resolved. This caused failures to run containers when the runtime was configured in csv mode or if NVIDIA_DRIVER_CAPABILITIES included graphics or display (e.g. all).
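
For context, the affected configurations include runs that request graphics or display capabilities, for example (an illustrative invocation):

  # Requesting all driver capabilities (including graphics and display)
  # exercised the code path affected by this bug.
  docker run --rm --gpus all -e NVIDIA_DRIVER_CAPABILITIES=all ubuntu nvidia-smi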

v1.12.0-rc.4

02 Feb 10:34

Pre-release
  • Generate a minimum CDI spec version for improved compatibility.
  • Add --device-name-strategy [index | uuid | type-index] options to the nvidia-ctk cdi generate command that can be used to control how device names are constructed.
  • Set default for CDI device name generation to index to generate device names such as nvidia.com/gpu=0 or nvidia.com/gpu=1:0 by default. NOTE: This is a breaking change and will cause a v0.5.0 CDI specification to be generated. To retain the previous behavior and generate a v0.4.0 CDI specification with nvidia.com/gpu=gpu0 or nvidia.com/gpu=mig1:0 device names, use the type-index option (see the example after this list).
  • Ensure that the nvidia-container-toolkit package can be upgraded from versions older than v1.11.0 on RPM-based systems.
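
A sketch of the two naming strategies (output paths are illustrative):

  # Default (index) strategy: produces device names such as nvidia.com/gpu=0 and nvidia.com/gpu=1:0
  sudo nvidia-ctk cdi generate --device-name-strategy=index --output=/etc/cdi/nvidia.yaml

  # Previous-style (type-index) strategy: produces nvidia.com/gpu=gpu0 and nvidia.com/gpu=mig1:0,
  # and keeps the generated specification at CDI version v0.4.0
  sudo nvidia-ctk cdi generate --device-name-strategy=type-index --output=/etc/cdi/nvidia.yaml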

v1.12.0-rc.3

02 Feb 10:31

Pre-release
  • Don't fail if by-path symlinks for DRM devices do not exist
  • Replace the --json flag with a --format [json|yaml] flag for the nvidia-ctk cdi generate command
  • Ensure that the CDI output folder is created if required
  • When generating a CDI specification use a blank host path for devices to ensure compatibility with the v0.4.0 CDI specification
  • Add injection of Wayland JSON files
  • Add GSP firmware paths to generated CDI specification
  • Add --root flag to nvidia-ctk cdi generate command to allow for a non-standard driver root to be specified

v1.12.0-rc.2

22 Nov 13:07

Pre-release
  • Update golang version to 1.18
  • Inject Direct Rendering Manager (DRM) devices into a container using the NVIDIA Container Runtime
  • Improve logging of errors from the NVIDIA Container Runtime
  • Improve CDI specification generation to support rootless podman
  • Use nvidia-ctk cdi generate to generate CDI specifications instead of nvidia-ctk info generate-cdi

Changes from libnvidia-container v1.12.0-rc.2

  • Skip creation of existing files when mounting them from the host

v1.12.0-rc.1

10 Oct 15:10

Pre-release
  • Improve injection of Vulkan configurations and libraries
  • Add nvidia-ctk info generate-cdi command to generate CDI specifications for available devices

Changes for the container-toolkit container

  • Update CUDA base images to 11.8.0

Changes from libnvidia-container v1.12.0-rc.1

  • Add NVVM Compiler Library (libnvidia-nvvm.so) to list of compute libraries

v1.11.0

14 Sep 14:43

This is a promotion of the v1.11.0-rc.3 release to GA.

This release of the NVIDIA Container Toolkit v1.11.0 is primarily targeted at adding support for injection of GPUDirect Storage and MOFED devices into containerized environments.

NOTE: This release is a unified release of the NVIDIA Container Toolkit consisting of multiple packages.

NOTE: This release does not include an update to nvidia-docker2 and is compatible with nvidia-docker2 2.11.0.

The packages for this release are published to the libnvidia-container package repositories.

v1.11.0-rc.3

  • Build fedora35 packages
  • Introduce an nvidia-container-toolkit-base package for better dependency management
  • Fix removal of nvidia-container-runtime-hook on RPM-based systems
  • Inject platform files into container on Tegra-based systems

NOTE: When upgrading from (or downgrading to) another 1.11.0-rc.* version it may be necessary to remove the nvidia-container-toolkit or nvidia-container-toolkit-base package(s) manually. This is due to the introduction of the nvidia-container-toolkit-base package, which now provides the configuration file for the NVIDIA Container Toolkit. Upgrades from or downgrades to older versions of the NVIDIA Container Toolkit (i.e. <= 1.10.0) should work as expected.
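
For example (illustrative commands; use the package manager appropriate to the distribution):

  # Remove the existing packages manually before switching between 1.11.0-rc.* versions
  sudo dnf remove nvidia-container-toolkit nvidia-container-toolkit-base       # RPM-based systems
  sudo apt-get remove nvidia-container-toolkit nvidia-container-toolkit-base   # Debian-based systems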

Changes for the container-toolkit container

  • Update CUDA base images to 11.7.1
  • Fix bug in setting of toolkit accept-nvidia-visible-devices-* config options introduced in v1.11.0-rc.2.

Changes from libnvidia-container v1.11.0-rc.3

  • Preload libgcc_s.so.1 on arm64 systems

v1.11.0-rc.2

Changes for the container-toolkit container

  • Allow accept-nvidia-visible-devices-* config options to be set by toolkit container

Changes from libnvidia-container v1.11.0-rc.2

  • Fix bug where LDCache was not updated when the --no-pivot-root option was specified

v1.11.0-rc.1

  • Add cdi mode to NVIDIA Container Runtime
  • Add discovery of GPUDirect Storage (nvidia-fs*) devices if the NVIDIA_GDS environment variable of the container is set to enabled
  • Add discovery of MOFED Infiniband devices if the NVIDIA_MOFED environment variable of the container is set to enabled
  • Fix bug in CSV mode where libraries listed as sym entries in the mount specification are not added to the LDCache.
  • Rename the nvidia-container-toolkit executable to nvidia-container-runtime-hook and create nvidia-container-toolkit as a symlink to nvidia-container-runtime-hook instead.
  • Add nvidia-ctk runtime configure command to configure the Docker config file (e.g. /etc/docker/daemon.json) for use with the NVIDIA Container Runtime.
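
A typical invocation might look as follows (a sketch; the restart step assumes a systemd-managed Docker daemon):

  # Add the nvidia runtime entry to /etc/docker/daemon.json, then restart Docker
  sudo nvidia-ctk runtime configure --runtime=docker
  sudo systemctl restart docker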