Migrate ECS to CDI #482

Merged: 4 commits merged into bottlerocket-os:develop on May 12, 2025

Conversation

arnaldo2792 (Contributor)

Issue number:

Related to #470

Description of changes:

This series of commits enables CDI for ECS.

A new package, nvidia-container-toolkit-cdi-specs, provides an additional service that generates CDI specifications. The service can be used by downstream builders that support CDI, and the package is automatically installed for ECS variants.
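
For context, the service essentially invokes the toolkit's CDI generator at boot; conceptually this amounts to a command like the one below (the actual unit definition, flags, and output path used by the package are not spelled out here):

nvidia-ctk cdi generate --output=/etc/cdi/nvidia.yaml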

The nvidia-container-toolkit configuration for ECS now forces cdi mode (see the sketch after the list below). The reason for this is twofold:

  • This truly enables CDI and drops the dependency on the prestart hook provided by nvidia-container-toolkit
  • This mode allows the use of the NVIDIA_VISIBLE_DEVICES environment variable, which the ECS agent uses to "pass down" the devices that must be configured in the container
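
Concretely, forcing CDI mode boils down to a setting along these lines in the toolkit's config.toml (a minimal sketch; the configuration shipped for ECS carries additional settings, see the diff further down):

[nvidia-container-runtime]
mode = "cdi"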

The last commit in the series adds a patch for nvidia-container-toolkit. The changes in #471 didn't work because nvidia-ctk failed to generate the CDI specifications for aarch64 hosts. This is because the module that parses ldcache is missing support for aarch64 hosts (see NVIDIA/nvidia-container-toolkit#1045). This is already being fixed upstream (see NVIDIA/nvidia-container-toolkit#1046).
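
For context, the ldcache module filters each ld.so.cache entry by an architecture flag stored in the entry's flag word; if the aarch64 value is not in the accepted set, every library on an aarch64 host is skipped. A rough Go sketch of the idea, with illustrative identifiers (not the toolkit's actual code) and flag values taken from glibc's ldconfig.h / libnvidia-container's ldcache.h:

package ldcache

// Illustrative only: identifiers are made up for this sketch.
const (
	flagArchMask     = 0xff00 // high byte of an ld.so.cache entry's flag word
	flagX8664Lib64   = 0x0300 // x86_64 libraries
	flagAarch64Lib64 = 0x0a00 // aarch64 libraries: the value the patch adds
)

// archAccepted mirrors the kind of check getEntries performs: entries whose
// architecture flag is not in the accepted set are dropped, which is why the
// missing aarch64 flag left nvidia-ctk with nothing to put into the CDI spec.
func archAccepted(flags int32, accepted ...int32) bool {
	for _, a := range accepted {
		if flags&flagArchMask == a {
			return true
		}
	}
	return false
}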

Testing done:

With aws-ecs-2-nvidia, on both x86_64 and aarch64 instances (g4dn.2xlarge and g5g.2xlarge):

  • Instances booted (previously, aarch64 hosts failed to boot)
  • Launched the nvidia-smoke test defined in testsys:
[root@a8bd83522145 /]# uname -a
Linux a8bd83522145 6.1.132 #1 SMP Mon Apr 21 16:58:15 UTC 2025 aarch64 aarch64 aarch64 GNU/Linux

[root@a8bd83522145 /]# find /samples -mindepth 1 -exec {} \;
GPU Device 0: "Turing" with compute capability 7.5

Running ........................................................

Overall Time For matrixMultiplyPerf

Printing Average of 20 measurements in (ms)
Size_KB  UMhint UMhntAs  UMeasy   0Copy MemCopy CpAsync CpHpglk CpPglAs
4         0.180   0.209   0.312   0.018   0.032   0.029   0.037   0.029
16        0.206   0.241   0.406   0.042   0.055   0.046   0.061   0.057
64        0.346   0.354   0.694   0.135   0.142   0.132   0.142   0.132
256       0.849   0.793   1.196   0.744   0.529   0.506   0.493   0.489
1024      2.961   2.679   3.433   4.777   2.129   2.077   1.912   1.899
4096     11.248  10.239  13.502  35.709   8.743   8.661   8.548   8.512
16384    50.874  48.221  60.778 334.087  42.651  42.575  42.570  42.579

NOTE: The CUDA Samples are not meant for performance measurements. Results may vary when GPU Boost is enabled.
/samples/deviceQuery Starting...

 CUDA Device Query (Runtime API) version (CUDART static linking)

Detected 1 CUDA Capable device(s)

Device 0: "NVIDIA T4G"
  CUDA Driver Version / Runtime Version          12.2 / 11.4
  CUDA Capability Major/Minor version number:    7.5
  Total amount of global memory:                 14931 MBytes (15655829504 bytes)
  (040) Multiprocessors, (064) CUDA Cores/MP:    2560 CUDA Cores
  GPU Max Clock rate:                            1590 MHz (1.59 GHz)
  Memory Clock rate:                             5001 Mhz
  Memory Bus Width:                              256-bit
  L2 Cache Size:                                 4194304 bytes
  Maximum Texture Dimension Size (x,y,z)         1D=(131072), 2D=(131072, 65536), 3D=(16384, 16384, 16384)
  Maximum Layered 1D Texture Size, (num) layers  1D=(32768), 2048 layers
  Maximum Layered 2D Texture Size, (num) layers  2D=(32768, 32768), 2048 layers
  Total amount of constant memory:               65536 bytes
  Total amount of shared memory per block:       49152 bytes
  Total shared memory per multiprocessor:        65536 bytes
  Total number of registers available per block: 65536
  Warp size:                                     32
  Maximum number of threads per multiprocessor:  1024
  Maximum number of threads per block:           1024
  Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
  Max dimension size of a grid size    (x,y,z): (2147483647, 65535, 65535)
  Maximum memory pitch:                          2147483647 bytes
  Texture alignment:                             512 bytes
  Concurrent copy and kernel execution:          Yes with 3 copy engine(s)
  Run time limit on kernels:                     No
  Integrated GPU sharing Host Memory:            No
  Support host page-locked memory mapping:       Yes
  Alignment requirement for Surfaces:            Yes
  Device has ECC support:                        Enabled
  Device supports Unified Addressing (UVA):      Yes
  Device supports Managed Memory:                Yes
  Device supports Compute Preemption:            Yes
  Supports Cooperative Kernel Launch:            Yes
  Supports MultiDevice Co-op Kernel Launch:      Yes
  Device PCI Domain ID / Bus ID / location ID:   0 / 0 / 31
  Compute Mode:
     < Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >

deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 12.2, CUDA Runtime Version = 11.4, NumDevs = 1
Result = PASS
[globalToShmemAsyncCopy] - Starting...
GPU Device 0: "Turing" with compute capability 7.5

MatrixA(1280,1280), MatrixB(1280,1280)
Running kernel = 0 - AsyncCopyMultiStageLargeChunk
Computing result using CUDA Kernel...
done
Performance= 336.17 GFlop/s, Time= 12.477 msec, Size= 4194304000 Ops, WorkgroupSize= 256 threads/block
Checking computed result for correctness: Result = PASS

NOTE: The CUDA Samples are not meant for performance measurements. Results may vary when GPU Boost is enabled.
Initializing...
GPU Device 0: "Turing" with compute capability 7.5

M: 4096 (16 x 256)
N: 4096 (16 x 256)
K: 4096 (16 x 256)
Preparing data for GPU...
Required shared memory size: 64 Kb
Computing... using high performance kernel compute_gemm_imma
Time: 5.052672 ms
TOPS: 27.20
reductionMultiBlockCG Starting...

GPU Device 0: "Turing" with compute capability 7.5

33554432 elements
numThreads: 1024
numBlocks: 40

Launching SinglePass Multi Block Cooperative Groups kernel
Average time: 0.917629 ms
Bandwidth:    146.265758 GB/s

GPU result = 1.992401361465
CPU result = 1.992401361465
Starting shfl_scan
GPU Device 0: "Turing" with compute capability 7.5

> Detected Compute SM 7.5 hardware with 40 multi-processors
Starting shfl_scan
GPU Device 0: "Turing" with compute capability 7.5

> Detected Compute SM 7.5 hardware with 40 multi-processors
Computing Simple Sum test
---------------------------------------------------
Initialize test data [1, 1, 1...]
Scan summation for 65536 elements, 256 partial sums
Partial summing 256 elements with 1 blocks of size 256
Test Sum: 65536
Time (ms): 0.024288
65536 elements scanned in 0.024288 ms -> 2698.287109 MegaElements/s
CPU verify result diff (GPUvsCPU) = 0
CPU sum (naive) took 0.053610 ms

Computing Integral Image Test on size 1920 x 1080 synthetic data
---------------------------------------------------
Method: Fast  Time (GPU Timer): 0.051968 ms Diff = 0
Method: Vertical Scan  Time (GPU Timer): 0.110240 ms
CheckSum: 2073600, (expect 1920x1080=2073600)
/samples/simpleAWBarrier starting...
GPU Device 0: "Turing" with compute capability 7.5

Launching normVecByDotProductAWBarrier kernel with numBlocks = 40 blockSize = 1024
Result = PASSED
/samples/simpleAWBarrier completed, returned OK
simpleAtomicIntrinsics starting...
GPU Device 0: "Turing" with compute capability 7.5

Processing time: 115.209999 (ms)
simpleAtomicIntrinsics completed, returned OK
[simpleVoteIntrinsics]
GPU Device 0: "Turing" with compute capability 7.5

> GPU device has 40 Multi-Processors, SM 7.5 compute capabilities

[VOTE Kernel Test 1/3]
        Running <<Vote.Any>> kernel1 ...
        OK

[VOTE Kernel Test 2/3]
        Running <<Vote.All>> kernel2 ...
        OK

[VOTE Kernel Test 3/3]
        Running <<Vote.Any>> kernel3 ...
        OK
        Shutting down...
[Vector addition of 50000 elements]
Copy input data from the host memory to the CUDA device
CUDA kernel launch with 196 blocks of 256 threads
Copy output data from the CUDA device to the host memory
Test PASSED
Done
GPU Device 0: "Turing" with compute capability 7.5

CPU max matches GPU max

Warp Aggregated Atomics PASSED
[root@a8bd83522145 /]#

As an extra precaution, I diffed the libraries and binaries mounted in the containers to make sure we aren't missing any library with the migration, and confirmed that the libraries are the same. Left: CDI, right: legacy:

/aarch64-bottlerocket-linux-gnu/sys-root/usr/bin/nvidia-debug	/aarch64-bottlerocket-linux-gnu/sys-root/usr/bin/nvidia-debug
/aarch64-bottlerocket-linux-gnu/sys-root/usr/bin/nvidia-persi	/aarch64-bottlerocket-linux-gnu/sys-root/usr/bin/nvidia-persi
/aarch64-bottlerocket-linux-gnu/sys-root/usr/bin/nvidia-smi	/aarch64-bottlerocket-linux-gnu/sys-root/usr/bin/nvidia-smi
/aarch64-bottlerocket-linux-gnu/sys-root/usr/lib/firmware/nvi |	/aarch64-bottlerocket-linux-gnu/sys-root/usr/lib/firmware/nvi
/aarch64-bottlerocket-linux-gnu/sys-root/usr/lib/firmware/nvi |	/aarch64-bottlerocket-linux-gnu/sys-root/usr/lib/firmware/nvi
/aarch64-bottlerocket-linux-gnu/sys-root/usr/lib/nvidia/tesla |	/aarch64-bottlerocket-linux-gnu/sys-root/usr/lib/nvidia/tesla
/aarch64-bottlerocket-linux-gnu/sys-root/usr/lib/nvidia/tesla |	/aarch64-bottlerocket-linux-gnu/sys-root/usr/lib/nvidia/tesla
/aarch64-bottlerocket-linux-gnu/sys-root/usr/lib/nvidia/tesla |	/aarch64-bottlerocket-linux-gnu/sys-root/usr/lib/nvidia/tesla
/aarch64-bottlerocket-linux-gnu/sys-root/usr/lib/nvidia/tesla |	/aarch64-bottlerocket-linux-gnu/sys-root/usr/lib/nvidia/tesla
/aarch64-bottlerocket-linux-gnu/sys-root/usr/lib/nvidia/tesla |	/aarch64-bottlerocket-linux-gnu/sys-root/usr/lib/nvidia/tesla
/aarch64-bottlerocket-linux-gnu/sys-root/usr/lib/nvidia/tesla |	/aarch64-bottlerocket-linux-gnu/sys-root/usr/lib/nvidia/tesla
/aarch64-bottlerocket-linux-gnu/sys-root/usr/lib/nvidia/tesla |	/aarch64-bottlerocket-linux-gnu/sys-root/usr/lib/nvidia/tesla
/aarch64-bottlerocket-linux-gnu/sys-root/usr/lib/nvidia/tesla |	/aarch64-bottlerocket-linux-gnu/sys-root/usr/lib/nvidia/tesla
/aarch64-bottlerocket-linux-gnu/sys-root/usr/lib/nvidia/tesla |	/aarch64-bottlerocket-linux-gnu/sys-root/usr/lib/nvidia/tesla
/aarch64-bottlerocket-linux-gnu/sys-root/usr/lib/nvidia/tesla |	/aarch64-bottlerocket-linux-gnu/sys-root/usr/lib/nvidia/tesla
/aarch64-bottlerocket-linux-gnu/sys-root/usr/lib/nvidia/tesla |	/aarch64-bottlerocket-linux-gnu/sys-root/usr/lib/nvidia/tesla
/aarch64-bottlerocket-linux-gnu/sys-root/usr/lib/nvidia/tesla |	/aarch64-bottlerocket-linux-gnu/sys-root/usr/lib/nvidia/tesla
/aarch64-bottlerocket-linux-gnu/sys-root/usr/lib/nvidia/tesla |	/aarch64-bottlerocket-linux-gnu/sys-root/usr/lib/nvidia/tesla
/aarch64-bottlerocket-linux-gnu/sys-root/usr/lib/nvidia/tesla |	/aarch64-bottlerocket-linux-gnu/sys-root/usr/lib/nvidia/tesla
/aarch64-bottlerocket-linux-gnu/sys-root/usr/lib/nvidia/tesla |	/aarch64-bottlerocket-linux-gnu/sys-root/usr/lib/nvidia/tesla
/aarch64-bottlerocket-linux-gnu/sys-root/usr/lib/nvidia/tesla |	/aarch64-bottlerocket-linux-gnu/sys-root/usr/lib/nvidia/tesla
/aarch64-bottlerocket-linux-gnu/sys-root/usr/lib/nvidia/tesla |	/aarch64-bottlerocket-linux-gnu/sys-root/usr/lib/nvidia/tesla
/aarch64-bottlerocket-linux-gnu/sys-root/usr/lib/nvidia/tesla |	/aarch64-bottlerocket-linux-gnu/sys-root/usr/lib/nvidia/tesla
/aarch64-bottlerocket-linux-gnu/sys-root/usr/lib/nvidia/tesla |	/aarch64-bottlerocket-linux-gnu/sys-root/usr/lib/nvidia/tesla
/aarch64-bottlerocket-linux-gnu/sys-root/usr/lib/nvidia/tesla |	/aarch64-bottlerocket-linux-gnu/sys-root/usr/lib/nvidia/tesla
/aarch64-bottlerocket-linux-gnu/sys-root/usr/lib/nvidia/tesla |	/aarch64-bottlerocket-linux-gnu/sys-root/usr/lib/nvidia/tesla
/aarch64-bottlerocket-linux-gnu/sys-root/usr/lib/nvidia/tesla |	/aarch64-bottlerocket-linux-gnu/sys-root/usr/lib/nvidia/tesla
/aarch64-bottlerocket-linux-gnu/sys-root/usr/lib/nvidia/tesla |	/aarch64-bottlerocket-linux-gnu/sys-root/usr/lib/nvidia/tesla
/aarch64-bottlerocket-linux-gnu/sys-root/usr/lib/nvidia/tesla |	/aarch64-bottlerocket-linux-gnu/sys-root/usr/lib/nvidia/tesla
/aarch64-bottlerocket-linux-gnu/sys-root/usr/lib/nvidia/tesla |	/aarch64-bottlerocket-linux-gnu/sys-root/usr/lib/nvidia/tesla
/aarch64-bottlerocket-linux-gnu/sys-root/usr/lib/nvidia/tesla |	/driver/nvidia/gpus/0000:00:1f.0
/aarch64-bottlerocket-linux-gnu/sys-root/usr/lib/nvidia/tesla |	/nvidia0
							      >	/nvidiactl
							      >	/nvidia-modeset
/nvidia-persistenced/socket					/nvidia-persistenced/socket
							      >	/nvidia-uvm
							      >	/nvidia-uvm-tools

Reviewers will notice the missing /driver/nvidia/gpus/0000:00:1f.0, /nvidia0, /nvidiactl, /nvidia-modeset, /nvidia-uvm, and /nvidia-uvm-tools mounts in CDI. This is because, instead of mounting the devices, CDI passes them as Linux devices configured in the OCI specification. So instead of mounts, the devices are created by the container runtime under /dev:

[root@a8bd83522145 /]# ls -la /dev/ | grep nvidia
crw-rw-rw-. 1 root root 195, 254 Apr 24 02:25 nvidia-modeset
crw-rw-rw-. 1 root root 240,   0 Apr 24 02:25 nvidia-uvm
crw-rw-rw-. 1 root root 240,   1 Apr 24 02:25 nvidia-uvm-tools
crw-rw-rw-. 1 root root 195,   0 Apr 24 02:25 nvidia0
crw-rw-rw-. 1 root root 195, 255 Apr 24 02:25 nvidiactl
[root@a8bd83522145 /]#

Without these devices, workloads that depend on CUDA wouldn't run (see this comment).
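
For context, these device nodes come from containerEdits entries in the generated CDI specification rather than from mounts. A trimmed, illustrative example of what such a spec looks like (device names and paths are examples, not the exact file generated on these hosts):

cdiVersion: "0.6.0"
kind: nvidia.com/gpu
devices:
  - name: "0"
    containerEdits:
      deviceNodes:
        - path: /dev/nvidia0
containerEdits:
  deviceNodes:
    - path: /dev/nvidiactl
    - path: /dev/nvidia-uvm
    - path: /dev/nvidia-uvm-tools
    - path: /dev/nvidia-modeset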

Terms of contribution:

By submitting this pull request, I agree that this contribution is dual-licensed under the terms of both the Apache License, version 2.0, and the MIT license.

The cdi-specs package provides additional files required for CDI to work in
other orchestrators like ECS, or any other downstream builder that
supports CDI.

Signed-off-by: Arnaldo Garcia Rincon <[email protected]>

Force the CDI mode for the ECS configuration

Signed-off-by: Arnaldo Garcia Rincon <[email protected]>

Enable the CDI feature and provide the CDI directories that should be
used to look for CDI specifications

Signed-off-by: Arnaldo Garcia Rincon <[email protected]>

The architecture flag for aarch64 is currently missing from the
supported architecture flags list. This omission causes the getEntries
function to exclude all libraries found on aarch64 hosts. As a result,
helper programs like nvidia-ctk are unable to generate CDI
specifications for the aarch64 architecture.

This fix adds the missing aarch64 architecture flag, using the same
value as defined in libnvidia-container[1], which maintains a more
comprehensive list of supported architectures.

[1]: https://github.com/NVIDIA/libnvidia-container/blob/a198166e1c1166f4847598438115ea97dacc7a92/src/ldcache.h#L21

Signed-off-by: Arnaldo Garcia Rincon <[email protected]>
@@ -3,3 +3,6 @@ root = "/"
path = "/usr/bin/nvidia-container-cli"
environment = []
ldconfig = "@/sbin/ldconfig"

[nvidia-container-runtime]
elezar

As a general question: Would using native support in some other runtime (e.g. containerd) be an option for bottlerocket at some point? Or would we need to build CDI support into bottlerocket directly?

arnaldo2792 (Contributor Author)

Hey @elezar, thanks for the interest in this PR!

We do have another PR (#459) to set up containerd as well for Kubernetes.

There are four use cases that require nvidia-container-runtime:

  1. The ECS agent passes down the devices through the env variable NVIDIA_VISIBLE_DEVICES, which containerd doesn't know about. Luckily, nvidia-container-runtime is aware of this environment variable and performs the CDI modifications accordingly (see the sketch after this list).
  2. For k8s, we have seen users rely on setting NVIDIA_VISIBLE_DEVICES to all to bypass the resource restrictions and get access to all the GPUs. This use case is similar to 1), and as with it, containerd doesn't know about the environment variable. We do advise against this; see the Security Guidance.
  3. Bottlerocket supports k8s 1.28, 1.29, 1.30, which don't support CDI, so we need the nvidia-container-toolkit to inject the prestart hook. We were doing something else to inject the prestart hook but it was unnecessarily complicated. Using nvidia-container-runtime is much simpler. See Packages: Drop shimpei oci add hook #458 if you are curious about what we used to do.
  4. Support "management" containers, e.g. dcgm-exporter, that require access to all GPUs by way of NVIDIA_VISIBLE_DEVICES=all. For this use case, I found that the CDI specifications generated by the k8s device plugin were missing the all device (see: Generate CDI specifications for the "all" device NVIDIA/k8s-device-plugin#1203), which caused failures in said containers.
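
For illustration, item 1 in practice looks roughly like this: the task definition reserves GPUs, and the ECS agent translates that reservation into NVIDIA_VISIBLE_DEVICES for the container (a hedged sketch of a container definition fragment, not taken from the testing above):

{
  "name": "cuda-workload",
  "image": "nvidia/cuda:12.2.0-base-ubuntu22.04",
  "resourceRequirements": [
    { "type": "GPU", "value": "1" }
  ]
}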

For the normal, happy-path use case, where users just use normal Kubernetes directives to configure their resources, we do rely on containerd's native support for CDI (as you can see in #459). It's the NVIDIA_VISIBLE_DEVICES support that we need nvidia-container-runtime for.

You all don't have to add any CDI support for Bottlerocket (although contributions are welcome!). We use the tools you all provide to generate the CDI specifications: nvidia-ctk for ECS and the Device Plugin for k8s. Besides that, we try to stay as close as possible to what you all do for other distros, and we try not to deviate too much unless strictly required.

KCSesh (Contributor) left a comment

This LGTM - has the changes I expected from #471 and fixes the ARM issue.

"features": {
"cdi": true
},
"cdi-spec-dirs": ["/etc/cdi/"]

Question: is there a reason that we ignore /var/run/cdi? Is this folder not applicable on Bottlerocket?

arnaldo2792 (Contributor Author)

For ECS, yeah, there is no need to read CDI specs from /var/run/cdi as the only CDI specs provider will be nvidia-ctk.


@ArangoGutierrez this service definition (and the repo in general) might be interesting from the point of view of your current work.

arnaldo2792 merged commit ed9f5b0 into bottlerocket-os:develop on May 12, 2025. 2 checks passed.