Releases: NVIDIA/nvidia-container-toolkit
v1.13.0-rc.2
- Don't fail chmod hook if paths are not injected
- Only create `by-path` symlinks if CDI devices are actually requested.
- Fix possible blank `nvidia-ctk` path in generated CDI specifications
- Fix error in `postun` scriptlet on RPM-based systems
- Only check `NVIDIA_VISIBLE_DEVICES` for environment variables if no annotations are specified.
- Add `cdi.default-kind` config option for constructing fully-qualified CDI device names in CDI mode
- Add support for `accept-nvidia-visible-devices-envvar-unprivileged` config setting in CDI mode
- Add `nvidia-container-runtime-hook.skip-mode-detection` config option to bypass mode detection. This allows `legacy` and `cdi` mode, for example, to be used at the same time.
- Add support for generating CDI specifications for GDS and MOFED devices
- Ensure CDI specification is validated on save when generating a spec
- Rename `--discovery-mode` argument to `--mode` for `nvidia-ctk cdi generate`
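For reference, a minimal sketch of the renamed flag. The `auto` mode value comes from the `v1.13.0-rc.1` notes below; the `--output` path is an assumption for illustration and may be omitted to print the specification to stdout:

```sh
# Previously: nvidia-ctk cdi generate --discovery-mode=auto
# As of this release the flag is named --mode; the output path shown here is an
# assumed conventional location for CDI specifications.
sudo nvidia-ctk cdi generate --mode=auto --output=/etc/cdi/nvidia.yaml
```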
Changes in the toolkit-container
- Add `--cdi-enabled` flag to toolkit config
- Install `nvidia-ctk` from toolkit container
- Use installed `nvidia-ctk` path in NVIDIA Container Toolkit config
- Bump CUDA base images to 12.1.0
- Set `nvidia-ctk` path in the
- Add `cdi.k8s.io/*` to set of allowed annotations in containerd config
- Generate CDI specification for use in management containers
- Install experimental runtime as `nvidia-container-runtime.experimental` instead of `nvidia-container-runtime-experimental`
- Install and configure mode-specific runtimes for `cdi` and `legacy` modes
Changes from libnvidia-container v1.13.0-rc.2
- Fix segfault on WSL2 systems. This was triggered in the `v1.12.1` and `v1.13.0-rc.1` releases.
Full Changelog: v1.13.0-rc.1...v1.13.0-rc.2
v1.12.1
- Don't fail chmod hook if paths are not injected. Fixes known issue in `v1.12.0` release
- Fix possible blank `nvidia-ctk` path in generated CDI specifications
- Fix error in `postun` scriptlet on RPM-based systems
- Fix missing NVML symbols when running `nvidia-ctk` on some platforms [#49]
- Discover all `gsp*.bin` GSP firmware files when generating CDI specification.
- Remove `fedora35` packaging targets
Changes in toolkit-container
- Install `nvidia-ctk` from toolkit container
- Use installed `nvidia-ctk` path in NVIDIA Container Toolkit config
- Bump CUDA base images to 12.1.0
Changes from libnvidia-container v1.12.1
- Include all `gsp*.bin` firmware files if present
Full Changelog: v1.12.0...v1.12.1
v1.13.0-rc.1
- Include MIG-enabled devices as GPUs when generating CDI specification
- Fix missing NVML symbols when running `nvidia-ctk` on some platforms [#49]
- Add CDI spec generation for WSL2-based systems to `nvidia-ctk cdi generate` command
- Add `auto` mode to `nvidia-ctk cdi generate` command to automatically detect a WSL2-based system over a standard NVML-based system.
- Add mode-specific (`.cdi` and `.legacy`) NVIDIA Container Runtime binaries for use in the GPU Operator
- Discover all `gsp*.bin` GSP firmware files when generating CDI specification.
- Align `.deb` and `.rpm` release candidate package versions
- Remove `fedora35` packaging targets
Changes in toolkit-container
- Install `nvidia-container-toolkit-operator-extensions` package for mode-specific executables.
- Allow `nvidia-container-runtime.mode` to be set when configuring the NVIDIA Container Toolkit
Changes from libnvidia-container v1.13.0-rc.1
- Include all `gsp*.bin` firmware files if present
- Align `.deb` and `.rpm` release candidate package versions
- Remove `fedora35` packaging targets
Full Changelog: v1.12.0...v1.13.0-rc.1
Known Issues
Failure to run container due to missing `/dev/dri` and/or `/dev/nvidia-caps` paths in container
As of `v1.12.0`, using a CDI specification generated with the `nvidia-ctk cdi generate` command may result in a failure to run a container if the selected device has no DRM device nodes (in `/dev/dri`) or NVIDIA caps devices (in `/dev/nvidia-caps`) associated with it. The workaround is to remove the following `createContainer` hook:
```yaml
- args:
  - nvidia-ctk
  - hook
  - chmod
  - --mode
  - "755"
  - --path
  - /dev/dri
  hookName: createContainer
  path: /usr/bin/nvidia-ctk
```
from the generated CDI specification, or select a device that includes associated DRM nodes at `/dev/dri` and/or NVIDIA caps devices at `/dev/nvidia-caps`.
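One way to apply the workaround, sketched under the assumption that the specification was written to `/etc/cdi/nvidia.yaml` (use whatever `--output` path you passed to `nvidia-ctk cdi generate`):

```sh
# Locate the chmod createContainer hook in the generated specification, then
# delete that hook entry, or regenerate the spec after selecting a device that
# has the associated /dev/dri and /dev/nvidia-caps nodes.
grep -n -B1 -A10 'chmod' /etc/cdi/nvidia.yaml
sudo "${EDITOR:-vi}" /etc/cdi/nvidia.yaml
```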
v1.12.0
This is a promotion of the v1.12.0-rc.5 release to GA.
This release of the NVIDIA Container Toolkit v1.12.0 adds the following features:
- Improved support for headless Vulkan applications in containerized environments.
- Tooling to generate Container Device Interface (CDI) specifications for GPU devices. The use of CDI is now the recommended mechanism for using GPUs in `podman`.
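As an illustration of this workflow, a hedged sketch follows; the output path and the `nvidia.com/gpu=0` device name are assumptions, since the exact names depend on the generated specification:

```sh
# Generate a CDI specification for the GPUs visible on the host, then request a
# device by its CDI name from podman (CDI support requires a recent podman release).
sudo nvidia-ctk cdi generate --output=/etc/cdi/nvidia.yaml
podman run --rm --device nvidia.com/gpu=0 ubuntu nvidia-smi
```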
NOTE: This release is a unified release of the NVIDIA Container Toolkit that consists of the following packages:
- `libnvidia-container 1.12.0`
- `nvidia-container-toolkit 1.12.0`
- `nvidia-container-runtime 3.12.0`
- `nvidia-docker2 2.12.0`
The packages for this release are published to the libnvidia-container package repositories.
Full Changelog: v1.11.0...v1.12.0
v1.12.0
Changes for the container-toolkit container
- Update CUDA base images to 12.0.1
Changes from libnvidia-container v1.12.0
- Add `nvcubins.bin` to DriverStore components under WSL2
v1.12.0-rc.5
- Fix bug where the `nvidia-ctk` path was not properly resolved. This caused failures to run containers when the runtime was configured in `csv` mode or when `NVIDIA_DRIVER_CAPABILITIES` included `graphics` or `display` (e.g. `all`).
v1.12.0-rc.4
- Generate a minimum CDI spec version for improved compatibility.
- Add `--device-name-strategy [index | uuid | type-index]` options to the `nvidia-ctk cdi generate` command that can be used to control how device names are constructed.
- Set default for CDI device name generation to `index` to generate device names such as `nvidia.com/gpu=0` or `nvidia.com/gpu=1:0` by default. NOTE: This is a breaking change and will cause a `v0.5.0` CDI specification to be generated. To keep the previous behavior and generate a `v0.4.0` CDI specification with `nvidia.com/gpu=gpu0` or `nvidia.com/gpu=mig1:0` device names, use the `type-index` option, as shown in the sketch after this list.
- Ensure that the `nvidia-container-toolkit` package can be upgraded from versions older than `v1.11.0` on RPM-based systems.
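A short sketch of the two naming strategies described above; the specification is written to stdout here since no `--output` flag is given:

```sh
# Default as of this release: index-based names such as nvidia.com/gpu=0 or
# nvidia.com/gpu=1:0, emitted in a v0.5.0 CDI specification.
nvidia-ctk cdi generate --device-name-strategy=index
# Previous behaviour: type-index names such as nvidia.com/gpu=gpu0 or
# nvidia.com/gpu=mig1:0, emitted in a v0.4.0 CDI specification.
nvidia-ctk cdi generate --device-name-strategy=type-index
```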
v1.12.0-rc.3
- Don't fail if by-path symlinks for DRM devices do not exist
- Replace the `--json` flag with a `--format [json|yaml]` flag for the `nvidia-ctk cdi generate` command
- Ensure that the CDI output folder is created if required
- When generating a CDI specification, use a blank host path for devices to ensure compatibility with the `v0.4.0` CDI specification
- Add injection of Wayland JSON files
- Add GSP firmware paths to generated CDI specification
- Add `--root` flag to `nvidia-ctk cdi generate` command to allow for a non-standard driver root to be specified
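A combined sketch of the `--format` and `--root` flags listed above; the `/run/nvidia/driver` path is a hypothetical example of a non-standard driver root:

```sh
# Emit the CDI specification as YAML for a driver installed under a non-standard root.
nvidia-ctk cdi generate --format=yaml --root=/run/nvidia/driver
```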
v1.12.0-rc.2
- Update golang version to 1.18
- Inject Direct Rendering Manager (DRM) devices into a container using the NVIDIA Container Runtime
- Improve logging of errors from the NVIDIA Container Runtime
- Improve CDI specification generation to support rootless podman
- Use `nvidia-ctk cdi generate` to generate CDI specifications instead of `nvidia-ctk info generate-cdi`
Changes from libnvidia-container v1.12.0-rc.2
- Skip creation of existing files when mounting them from the host
v1.12.0-rc.1
- Improve injection of Vulkan configurations and libraries
- Add `nvidia-ctk info generate-cdi` command to generate CDI specifications for available devices
Changes for the container-toolkit container
- Update CUDA base images to 11.8.0
Changes from libnvidia-container v1.12.0-rc.1
- Add NVVM Compiler Library (`libnvidia-nvvm.so`) to list of compute libraries
v1.11.0
This is a promotion of the v1.11.0-rc.3 release to GA.
This release of the NVIDIA Container Toolkit v1.11.0 is primarily targeted at adding support for injection of GPUDirect Storage and MOFED devices into containerized environments.
NOTE: This release is a unified release of the NVIDIA Container Toolkit that consists of the following packages:
NOTE: This release does not include an update to nvidia-docker2 and is compatible with nvidia-docker2 2.11.0.
The packages for this release are published to the libnvidia-container package repositories.
1.11.0-rc.3
- Build fedora35 packages
- Introduce an `nvidia-container-toolkit-base` package for better dependency management
- Fix removal of `nvidia-container-runtime-hook` on RPM-based systems
- Inject platform files into container on Tegra-based systems
NOTE: When upgrading from (or downgrading to) another `1.11.0-rc.*` version, it may be necessary to remove the `nvidia-container-toolkit` or `nvidia-container-toolkit-base` package(s) manually. This is due to the introduction of the `nvidia-container-toolkit-base` package, which now provides the configuration file for the NVIDIA Container Toolkit. Upgrades from or downgrades to older versions of the NVIDIA Container Toolkit (i.e. <= 1.10.0) should work as expected.
Changes for the container-toolkit container
- Update CUDA base images to 11.7.1
- Fix bug in setting of toolkit `accept-nvidia-visible-devices-*` config options introduced in `v1.11.0-rc.2`.
Changes from libnvidia-container v1.11.0-rc.3
- Preload `libgcc_s.so.1` on arm64 systems
1.11.0-rc.2
Changes for the container-toolkit container
- Allow `accept-nvidia-visible-devices-*` config options to be set by toolkit container
Changes from libnvidia-container v1.11.0-rc.2
- Fix bug where LDCache was not updated when the `--no-pivot-root` option was specified
1.11.0-rc.1
- Add `cdi` mode to NVIDIA Container Runtime
- Add discovery of GPUDirect Storage (`nvidia-fs*`) devices if the `NVIDIA_GDS` environment variable of the container is set to `enabled`
- Add discovery of MOFED Infiniband devices if the `NVIDIA_MOFED` environment variable of the container is set to `enabled`
- Fix bug in CSV mode where libraries listed as `sym` entries in mount specification are not added to the LDCache.
- Rename `nvidia-container-toolkit` executable to `nvidia-container-runtime-hook` and create `nvidia-container-toolkit` as a symlink to `nvidia-container-runtime-hook` instead.
- Add `nvidia-ctk runtime configure` command to configure the Docker config file (e.g. `/etc/docker/daemon.json`) for use with the NVIDIA Container Runtime.
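A minimal sketch of the new command, assuming Docker is the target engine; the exact contents written to `/etc/docker/daemon.json` may differ by version, but the intent is to register an `nvidia` runtime entry pointing at `nvidia-container-runtime`:

```sh
# Add the NVIDIA runtime to the Docker daemon configuration, then restart Docker
# so that containers can be started with --runtime=nvidia.
sudo nvidia-ctk runtime configure
sudo systemctl restart docker
```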