Releases: NVIDIA/nvidia-container-toolkit
v1.11.0-rc.3
- Build fedora35 packages
- Introduce an `nvidia-container-toolkit-base` package for better dependency management
- Fix removal of `nvidia-container-runtime-hook` on RPM-based systems
- Inject platform files into container on Tegra-based systems
NOTE: When upgrading from (or downgrading to) another 1.11.0-rc.* version it may be required to remove the `nvidia-container-toolkit` or `nvidia-container-toolkit-base` package(s) manually. This is due to the introduction of the `nvidia-container-toolkit-base` package, which now provides the configuration file for the NVIDIA Container Toolkit. Upgrades from or downgrades to older versions of the NVIDIA Container Toolkit (i.e. <= 1.10.0) should work as expected.
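As a sketch, the manual removal described above might look like the following on a Debian-based system (package names are those from this release; use your distribution's package manager on RPM-based systems):

```shell
# Illustrative only: remove the toolkit packages before moving between
# 1.11.0-rc.* versions, then reinstall the target version.
sudo apt-get remove nvidia-container-toolkit nvidia-container-toolkit-base
sudo apt-get install nvidia-container-toolkit
```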
Changes for the container-toolkit container
- Update CUDA base images to 11.7.1
- Fix bug in setting of toolkit `accept-nvidia-visible-devices-*` config options introduced in `v1.11.0-rc.2`
Changes from libnvidia-container v1.11.0-rc.3
- Preload `libgcc_s.so.1` on arm64 systems
v1.11.0-rc.2
Changes for the container-toolkit container
- Allow `accept-nvidia-visible-devices-*` config options to be set by toolkit container
Changes from libnvidia-container v1.11.0-rc.2
- Fix bug where LDCache was not updated when the `--no-pivot-root` option was specified
v1.11.0-rc.1
- Add `cdi` mode to NVIDIA Container Runtime
- Add discovery of GPUDirect Storage (`nvidia-fs*`) devices if the `NVIDIA_GDS` environment variable of the container is set to `enabled`
- Add discovery of MOFED InfiniBand devices if the `NVIDIA_MOFED` environment variable of the container is set to `enabled`
- Fix bug in CSV mode where libraries listed as `sym` entries in mount specification are not added to the LDCache
- Rename `nvidia-container-toolkit` executable to `nvidia-container-runtime-hook` and create `nvidia-container-toolkit` as a symlink to `nvidia-container-runtime-hook` instead
- Add `nvidia-ctk runtime configure` command to configure the Docker config file (e.g. `/etc/docker/daemon.json`) for use with the NVIDIA Container Runtime
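The new `nvidia-ctk runtime configure` command can be invoked roughly as follows (a sketch of the documented usage; the `--runtime` flag selects the container engine to configure):

```shell
# Register the NVIDIA Container Runtime in /etc/docker/daemon.json,
# then restart the Docker daemon to pick up the change.
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker
```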
v1.10.0
This is a promotion of the v1.10.0-rc.3 release to GA.
This release of the NVIDIA Container Toolkit v1.10.0 is primarily targeted at improving support for Tegra-based systems.
It sees the introduction of a new mode of operation for the NVIDIA Container Runtime that makes modifications to the incoming OCI runtime
specification directly instead of relying on the NVIDIA Container CLI.
NOTE: This release is a unified release of the NVIDIA Container Toolkit that consists of the following packages:
- libnvidia-container 1.10.0
- nvidia-container-toolkit 1.10.0
- nvidia-container-runtime 3.10.0
- nvidia-docker2 2.11.0
The packages for this release are published to the libnvidia-container package repositories.
- Update config files to include default settings for `nvidia-container-runtime.mode` and `nvidia-container-runtime.runtimes`
- Update `container-toolkit` base image to CUDA 11.7.0
- Switch to `ubuntu20.04` for default `container-toolkit` image
- Stop publishing all `centos8` and `arm64` `ubuntu18.04` `container-toolkit` images
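For illustration, the defaults referenced above live in `/etc/nvidia-container-runtime/config.toml` and take roughly this shape (the values shown are assumptions based on the option names, not an authoritative listing):

```toml
[nvidia-container-runtime]
# "auto" selects the mode of operation based on the platform
mode = "auto"
# candidate low-level runtimes, searched in order
runtimes = ["docker-runc", "runc"]
```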
1.10.0-rc.3
- Use default config instead of raising an error if config file cannot be found
- Ignore `NVIDIA_REQUIRE_JETPACK*` environment variables for requirement checks
- Fix bug in detection of Tegra systems where `/sys/devices/soc0/family` is ignored
- Fix bug where links to devices were detected as devices
Changes for the container-toolkit container
- Fix bug where runtime binary path was misconfigured for containerd when using v1 of the config file
Changes from libnvidia-container v1.10.0-rc.3
- Fix bug introduced when adding `libcudadebugger.so` to list of libraries in `v1.10.0-rc.2`
1.10.0-rc.2
- Add support for `NVIDIA_REQUIRE_*` checks for `cuda` version and `arch` to `csv` mode
- Switch to debug logging to reduce log verbosity
- Support logging to the log files requested on the command line
- Fix bug when launching containers with relative root path (e.g. when using containerd)
- Allow low-level runtime path to be set explicitly via the `nvidia-container-runtime.runtimes` option
- Fix failure to locate low-level runtime if the `PATH` environment variable is unset
- Replace experimental option for the NVIDIA Container Runtime with the `nvidia-container-runtime.mode = "csv"` option
- Use `csv` as the default mode on Tegra systems without NVML
- Add `--version` flag to all CLIs
Changes from libnvidia-container v1.10.0-rc.2
- Bump `libtirpc` to `1.3.2` (libnvidia-container#168)
- Fix bug when running host `ldconfig` using `glibc` compiled with a non-standard prefix
- Add `libcudadebugger.so` to list of compute libraries
1.10.0-rc.1
- Add `nvidia-container-runtime.log-level` config option to control the level of logging in the NVIDIA Container Runtime
- Add `nvidia-container-runtime.experimental` config option that allows experimental features to be enabled
- Add `nvidia-container-runtime.discover-mode` config option to control how modifications are applied to the incoming OCI runtime specification in experimental mode
- Add support for direct modification of the incoming OCI specification to the NVIDIA Container Runtime; this is targeted at Tegra-based systems with CSV-file-based mount specifications
Changes from libnvidia-container v1.10.0-rc.1
- [WSL2] Fix segmentation fault on WSL2 systems with no adapters present (e.g. `/dev/dxg` missing)
- Ignore pending MIG mode when checking if a device is MIG enabled
- [WSL2] Fix bug where `/dev/dxg` is not mounted when `NVIDIA_DRIVER_CAPABILITIES` does not include `"compute"`
v1.10.0-rc.3
- Use default config instead of raising an error if config file cannot be found
- Ignore `NVIDIA_REQUIRE_JETPACK*` environment variables for requirement checks
- Fix bug in detection of Tegra systems where `/sys/devices/soc0/family` is ignored
- Fix bug where links to devices were detected as devices
Changes for the container-toolkit container
- Fix bug where runtime binary path was misconfigured for containerd when using v1 of the config file
Changes from libnvidia-container v1.10.0-rc.3
- Fix bug introduced when adding `libcudadebugger.so` to list of libraries in `v1.10.0-rc.2`
v1.10.0-rc.2
- Add support for `NVIDIA_REQUIRE_*` checks for `cuda` version and `arch` to `csv` mode
- Switch to debug logging to reduce log verbosity
- Support logging to the log files requested on the command line
- Fix bug when launching containers with relative root path (e.g. when using containerd)
- Allow low-level runtime path to be set explicitly via the `nvidia-container-runtime.runtimes` option
- Fix failure to locate low-level runtime if the `PATH` environment variable is unset
- Replace experimental option for the NVIDIA Container Runtime with the `nvidia-container-runtime.mode = "csv"` option
- Use `csv` as the default mode on Tegra systems without NVML
- Add `--version` flag to all CLIs
Changes from libnvidia-container v1.10.0-rc.2
- Bump `libtirpc` to `1.3.2` (libnvidia-container#168)
- Fix bug when running host `ldconfig` using `glibc` compiled with a non-standard prefix
- Add `libcudadebugger.so` to list of compute libraries
v1.10.0-rc.1
- Add `nvidia-container-runtime.log-level` config option to control the level of logging in the NVIDIA Container Runtime
- Add `nvidia-container-runtime.experimental` config option that allows experimental features to be enabled
- Add `nvidia-container-runtime.discover-mode` config option to control how modifications are applied to the incoming OCI runtime specification in experimental mode
- Add support for direct modification of the incoming OCI specification to the NVIDIA Container Runtime; this is targeted at Tegra-based systems with CSV-file-based mount specifications
Changes from libnvidia-container v1.10.0-rc.1
- [WSL2] Fix segmentation fault on WSL2 systems with no adapters present (e.g. `/dev/dxg` missing)
- Ignore pending MIG mode when checking if a device is MIG enabled
- [WSL2] Fix bug where `/dev/dxg` is not mounted when `NVIDIA_DRIVER_CAPABILITIES` does not include `"compute"`
v1.9.0
This release of the NVIDIA Container Toolkit v1.9.0 is primarily targeted at adding multi-arch support for the container-toolkit images. It also includes enhancements for use on Tegra-based systems and some notable bugfixes.
NOTE: This release is a unified release of the NVIDIA Container Toolkit that consists of the following packages:
- libnvidia-container 1.9.0
- nvidia-container-toolkit 1.9.0
- nvidia-container-runtime 3.9.0
- nvidia-docker2 2.10.0
Changes from libnvidia-container 1.9.0
- Add additional check for Tegra in `/sys/.../family` file in CLI
- Update Jetpack-specific CLI option to only load Base CSV files by default
- Fix bug (from `v1.8.0`) when mounting GSP firmware into containers without `/lib` to `/usr/lib` symlinks
- Update `nvml.h` to CUDA 11.6.1 `nvML_DEV 11.6.55`
- Update switch statement to include new brands from latest `nvml.h`
- Process all `--require` flags on Jetson platforms
- Fix long-standing issue with running `ldconfig` on Debian systems
v1.8.1
This release is a bugfix release that fixes issues around cgroups found in NVIDIA Container Toolkit 1.8.0.
NOTE: This release is a unified release of the NVIDIA Container Toolkit that consists of the following packages:
- libnvidia-container 1.8.1
- nvidia-container-toolkit 1.8.1
- nvidia-container-runtime 3.8.1
- nvidia-docker2 2.9.1
Changes from libnvidia-container 1.8.1
- Fix bug in determining cgroup root when running in nested containers
- Fix permission issue when determining cgroup version (see NVIDIA/libnvidia-container#158)
v1.8.0
This is a promotion of the v1.8.0-rc.2 release to GA.
It adds cgroupv2 support to the NVIDIA Container Toolkit and removes packaging support for Amazon Linux 1.
NOTE: This release is a unified release of the NVIDIA Container Toolkit that consists of the following packages:
- libnvidia-container 1.8.0
- nvidia-container-toolkit 1.8.0
- nvidia-container-runtime 3.8.0
- nvidia-docker2 2.9.0
The packages for this release are published to the libnvidia-container package repositories.
1.8.0-rc.2
- Remove support for building `amazonlinux1` packages
Changes from libnvidia-container 1.8.0-rc.2
- Include libnvidia-pkcs11.so in compute libraries
- Include firmware paths in list command
- Correct GSP firmware mount permissions
- Fix bug to support cgroupv2 on Linux kernels < 5.5
- Fix bug in cgroupv2 logic when in mixed v1 / v2 environment
1.8.0-rc.1
- Release toolkit-container images from nvidia-container-toolkit repository
Changes from libnvidia-container 1.8.0-rc.1
- Add support for cgroupv2