After the deployment finished, the corresponding pod started up normally, as shown in the screenshot below:

However, the allocated container still occupies an entire GPU:

The container's resource configuration is as follows:
resources:
  limits:
    cpu: 1
    memory: 10Gi
    nvidia.com/gpu: 1
    nvidia.com/gpucores: 30
    nvidia.com/gpumem: 300
  requests:
    cpu: 1
    memory: 10Gi
    nvidia.com/gpu: 1
    nvidia.com/gpucores: 30
    nvidia.com/gpumem: 300
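For reference, a minimal Pod spec that reproduces this configuration would look roughly like the sketch below; the pod name, container name, and image are placeholders rather than the actual deployment, and the comments reflect my assumption that gpumem is in MiB and gpucores is a percentage of a single card:

```yaml
# Minimal reproduction sketch; names and image are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: vgpu-demo                                   # placeholder name
spec:
  containers:
    - name: cuda                                    # placeholder name
      image: nvidia/cuda:12.2.0-base-ubuntu22.04    # placeholder image
      command: ["sleep", "infinity"]
      resources:
        limits:
          cpu: 1
          memory: 10Gi
          nvidia.com/gpu: 1
          nvidia.com/gpucores: 30                   # assumed: % of one card
          nvidia.com/gpumem: 300                    # assumed: MiB
        requests:
          cpu: 1
          memory: 10Gi
          nvidia.com/gpu: 1
          nvidia.com/gpucores: 30
          nvidia.com/gpumem: 300
```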
It is worth noting that when the vGPU components were deployed, there was already a pod on the cluster occupying 2 GPUs. For some reason that pod cannot be terminated; could that be interfering? In any case, the vGPU limits are currently not taking effect.
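One way to check whether the limits are enforced at all is to run `nvidia-smi` from inside a container that requests the same vGPU resources. A minimal sketch of such a throwaway Job is below (image and names are placeholders, not from the actual cluster):

```yaml
# Throwaway Job to inspect what the container actually sees.
# If the nvidia.com/gpumem: 300 limit were in effect, nvidia-smi inside the
# container should report roughly 300MiB of total memory rather than the
# full card.
apiVersion: batch/v1
kind: Job
metadata:
  name: vgpu-limit-check
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: check
          image: nvidia/cuda:12.2.0-base-ubuntu22.04   # placeholder image
          command: ["nvidia-smi"]
          resources:
            limits:
              nvidia.com/gpu: 1
              nvidia.com/gpucores: 30
              nvidia.com/gpumem: 300
```

Reading the Job's logs with `kubectl logs job/vgpu-limit-check` then shows the memory total the container actually sees.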