v1.14.0
This is a promotion of the (internal) v1.14.0-rc.3 release to GA.
This release of the NVIDIA Container Toolkit adds the following features:
- Improved support for the Container Device Interface (CDI) on Tegra-based systems
- Simplified packaging and distribution. We now only generate `.deb` and `.rpm` packages that are compatible with all supported distributions instead of releasing distribution-specific packages (an install sketch follows the package list below).
NOTE: This will be the last release that includes the nvidia-container-runtime and nvidia-docker2 packages.
NOTE: This release is a unified release of the NVIDIA Container Toolkit that consists of the following packages:
- libnvidia-container 1.14.0
- nvidia-container-toolkit 1.14.0
- nvidia-container-runtime 3.14.0
- nvidia-docker2 2.14.0
The packages for this release are published to the libnvidia-container package repositories.
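A rough install sketch for the unified packages, assuming the libnvidia-container package repository has already been configured for the target distribution (repository setup is not covered in these notes):

```sh
# Debian/Ubuntu-based systems (unified .deb package)
sudo apt-get update
sudo apt-get install -y nvidia-container-toolkit

# RHEL-based systems (unified .rpm package)
sudo dnf install -y nvidia-container-toolkit
```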
New Contributors
- @elliotcourant made their first contribution in #61
Full Changelog: v1.13.0...v1.14.0
v1.14.0-rc.3
- Added support for generating the OCI hook JSON file to the `nvidia-ctk runtime configure` command.
- Remove installation of the OCI hook JSON from the RPM package.
- Refactored config for `nvidia-container-runtime-hook`.
- Added a `nvidia-ctk config` command which supports setting config options using a `--set` flag (see the example after this list).
- Added `--library-search-path` option to the `nvidia-ctk cdi generate` command in `csv` mode. This allows folders where libraries are located to be specified explicitly (a sketch follows this list).
- Updated go-nvlib to support devices which are not present in the PCI device database. This allows the creation of `/dev/char` symlinks on systems with such devices installed.
- Added `UsesNVGPUModule` info function for more robust platform detection. This is required on Tegra-based systems where `libnvidia-ml.so` is also supported.
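An illustration of the new `nvidia-ctk config` command with `--set`. This is a sketch, assuming the command writes the resulting config to stdout by default; the dotted key shown is an assumed example key, not taken from these notes:

```sh
# Print the toolkit config with one option overridden via --set.
# The key below is an illustrative assumption.
nvidia-ctk config --set nvidia-container-cli.no-cgroups=true
```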
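The `--library-search-path` option only applies in `csv` mode (typically used on Tegra-based systems). A hedged sketch, assuming the mode is selected with a `--mode=csv` flag and that `--output` is used to write the spec; the library path shown is a placeholder:

```sh
sudo nvidia-ctk cdi generate \
    --mode=csv \
    --library-search-path=/usr/lib/aarch64-linux-gnu/tegra \
    --output=/etc/cdi/nvidia.yaml
```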
Changes from libnvidia-container v1.14.0-rc.3
- Generate the `nvc.h` header file automatically so that the version does not need to be explicitly bumped.
Changes in the toolkit-container
- Set `NVIDIA_VISIBLE_DEVICES=void` to prevent injection of NVIDIA devices and drivers into the NVIDIA Container Toolkit container.
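`NVIDIA_VISIBLE_DEVICES=void` opts a container out of GPU injection. The toolkit container now sets this for itself; the sketch below applies the same opt-out to an arbitrary container on a host where the NVIDIA runtime is the default (image name is a placeholder):

```sh
# With the NVIDIA runtime as the default, this container gets no NVIDIA devices injected.
docker run --rm -e NVIDIA_VISIBLE_DEVICES=void ubuntu:22.04 \
    sh -c 'ls /dev | grep nvidia || echo "no NVIDIA devices injected"'
```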
v1.14.0-rc.2
- Fix bug causing incorrect nvidia-smi symlink to be created on WSL2 systems with multiple driver roots.
- Remove dependency on coreutils when installing package on RPM-based systems.
- Create output folders if required when running `nvidia-ctk runtime configure` (see the example after this list).
- Generate default config as post-install step.
- Added support for detecting GSP firmware at custom paths when generating CDI specifications.
- Added logic to skip the extraction of image requirements if `NVIDIA_DISABLE_REQUIRES` is set to `true`.
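For context on the `nvidia-ctk runtime configure` item above, a typical invocation is shown below. The `--runtime=docker` flag follows the commonly documented usage and is an assumption here; with this release the command also creates missing output folders (e.g. `/etc/docker`) as needed:

```sh
# Configure Docker to use the NVIDIA runtime; missing output folders are created if required.
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker
```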
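A sketch of the `NVIDIA_DISABLE_REQUIRES` behaviour: setting it to `true` skips the extraction and checking of image requirements (for example, a CUDA version constraint declared by the base image). The image and command are placeholders:

```sh
docker run --rm --runtime=nvidia \
    -e NVIDIA_VISIBLE_DEVICES=all \
    -e NVIDIA_DISABLE_REQUIRES=true \
    nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi
```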
Changes from libnvidia-container v1.14.0-rc.2
- Include Shared Compiler Library (`libnvidia-gpucomp.so`) in the list of compute libraries.
Changes in the toolkit-container
- Ensure that common envvars have higher priority when configuring the container engines.
- Bump CUDA base image version to 12.2.0.
- Remove installation of the nvidia-experimental runtime. This is superseded by the NVIDIA Container Runtime in CDI mode.
v1.14.0-rc.1
- chore(cmd): Fixing minor spelling error. by @elliotcourant in #61
- Add support for updating containerd configs to the `nvidia-ctk runtime configure` command (see the example after this list).
- Create file in `/etc/ld.so.conf.d` with permissions `644` to support non-root containers.
- Generate CDI specification files with `644` permissions to allow rootless applications (e.g. podman).
- Add `nvidia-ctk cdi list` command to show the known CDI devices (see the example after this list).
- Add support for generating merged devices (e.g. `all` device) to the nvcdi API.
- Use . pattern to locate libcuda.so when generating a CDI specification to support platforms where a patch version is not specified.
- Update go-nvlib to skip devices that are not MIG capable when generating CDI specifications.
- Add `nvidia-container-runtime-hook.path` config option to specify the NVIDIA Container Runtime Hook path explicitly (see the sketch after this list).
- Fix bug in creation of `/dev/char` symlinks by failing the operation if kernel modules are not loaded.
- Add option to load kernel modules when creating device nodes.
- Add option to create device nodes when creating `/dev/char` symlinks.
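A minimal sketch of the containerd support in `nvidia-ctk runtime configure`; the `--runtime=containerd` flag is assumed to follow the same pattern as the docker case:

```sh
# Update the containerd config to register the NVIDIA runtime.
sudo nvidia-ctk runtime configure --runtime=containerd
sudo systemctl restart containerd
```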
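The `nvidia-ctk cdi list` command prints the CDI device names known from generated specifications. A sketch, assuming a CDI spec has already been generated (for example with `nvidia-ctk cdi generate`); the output shown is illustrative:

```sh
nvidia-ctk cdi list
# Example output (illustrative):
#   nvidia.com/gpu=0
#   nvidia.com/gpu=all
```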
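The new `nvidia-container-runtime-hook.path` option lives in the toolkit config (normally `/etc/nvidia-container-runtime/config.toml`). A hedged sketch that sets it via the `nvidia-ctk config --set` command introduced in v1.14.0-rc.3; the dotted `key=value` form, the hook path, and the `tee` redirection are assumptions for illustration:

```sh
# Set the hook path and write the resulting config; values shown are illustrative.
nvidia-ctk config --set nvidia-container-runtime-hook.path=/usr/bin/nvidia-container-runtime-hook \
    | sudo tee /etc/nvidia-container-runtime/config.toml
```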
Changes from libnvidia-container v1.14.0-rc.1
- Support OpenSSL 3 with the Encrypt/Decrypt library
Changes in the toolkit-container
- Bump CUDA base image version to 12.1.1.
- Unify environment variables used to configure runtimes.