Description
When using Vulkan, I have had to mount several config files into my container myself. For example, here are the files I currently mount (a sketch of how they are wired into a Pod follows their contents below):
```yaml
nvidia_icd.json: |
  {
    "file_format_version": "1.0.0",
    "ICD": {
      "library_path": "libGLX_nvidia.so.0",
      "api_version": "1.3.224"
    }
  }
nvidia_layers.json: |
  {
    "file_format_version": "1.0.0",
    "layer": {
      "name": "VK_LAYER_NV_optimus",
      "type": "INSTANCE",
      "library_path": "libGLX_nvidia.so.0",
      "api_version": "1.3.224",
      "implementation_version": "1",
      "description": "NVIDIA Optimus layer",
      "functions": {
        "vkGetInstanceProcAddr": "vk_optimusGetInstanceProcAddr",
        "vkGetDeviceProcAddr": "vk_optimusGetDeviceProcAddr"
      },
      "enable_environment": {
        "__NV_PRIME_RENDER_OFFLOAD": "1"
      },
      "disable_environment": {
        "DISABLE_LAYER_NV_OPTIMUS_1": ""
      }
    }
  }
10_nvidia.json: |
  {
    "file_format_version": "1.0.0",
    "ICD": {
      "library_path": "libEGL_nvidia.so.0"
    }
  }
```
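For context, here is roughly how I have been wiring these files into a Pod. This is a minimal sketch: the Pod name, the container image, and the ConfigMap name `vulkan-configs` are placeholders, and the mount paths assume the loader's standard search directories.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: vulkan-app            # placeholder name
spec:
  containers:
    - name: app
      image: my-vulkan-image  # placeholder image
      volumeMounts:
        # Vulkan ICD manifest
        - name: vulkan-configs
          mountPath: /usr/share/vulkan/icd.d/nvidia_icd.json
          subPath: nvidia_icd.json
        # Vulkan implicit layer manifest
        - name: vulkan-configs
          mountPath: /usr/share/vulkan/implicit_layer.d/nvidia_layers.json
          subPath: nvidia_layers.json
        # EGL vendor library manifest for glvnd
        - name: vulkan-configs
          mountPath: /usr/share/glvnd/egl_vendor.d/10_nvidia.json
          subPath: 10_nvidia.json
  volumes:
    - name: vulkan-configs
      configMap:
        name: vulkan-configs  # ConfigMap holding the three JSON files above
```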
I pulled these config files from a matching install of the NVIDIA driver. Previously, the NVIDIA Container Toolkit would not mount any of these files, which is why I was mounting them myself. Now, as I understand it, if these files are present on the host system, the toolkit should mount them into containers automatically. From my reading of the r570 and r525 packages, Bottlerocket just isn't copying any of these files into the base image.
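For reference, on a typical Linux install the driver places these manifests at the following host paths, which is where I would expect the toolkit to look for them (exact locations can vary by distro):

```
/usr/share/vulkan/icd.d/nvidia_icd.json
/usr/share/vulkan/implicit_layer.d/nvidia_layers.json
/usr/share/glvnd/egl_vendor.d/10_nvidia.json
```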
This became especially problematic when r570 started rolling out on my Kubernetes cluster and broke all of my Vulkan containers, because the version in the ICD file MUST match the driver version. That is why it's important that these files be preserved in Bottlerocket and mounted by the NVIDIA Container Toolkit.