What happened:
After a volume was manually detached from a VM, two pods on the same node ended up mounting the same volume.
What you expected to happen:
In an AWS cluster, the same scenario produces this error message:
Warning FailedMount 14s (x6 over 30s) kubelet MountVolume.MountDevice failed for volume "pvc-XXXXX" : rpc error: code = Internal desc = Failed to find device path /dev/xvdaa. refusing to mount /dev/nvme3n1 because it claims to be volX but should be volY
I would expect the Azure disk CSI driver to handle this in a similar manner.
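The AWS message above appears to come from a check that the block device really corresponds to the expected volume before it is mounted. Purely for illustration, here is a minimal sketch of how the equivalent state can be inspected by hand on an Azure node; the PV name and device path are placeholders, and the `/dev/disk/azure` links depend on the Azure udev rules shipped with the image:

```shell
# Sketch only: inspect, by hand on the node, which block device actually
# backs a pod's volume mount. The PV name and /dev/sdc are placeholders.

# Find the device behind the kubelet mount for a given PV:
findmnt -o TARGET,SOURCE | grep pvc-

# List the Azure data-disk LUN links (present on images with the Azure udev
# rules) and compare them with what is actually attached to the VM:
ls -l /dev/disk/azure/scsi1/

# Ask the kernel what identity the device itself reports:
udevadm info --query=property --name=/dev/sdc | grep -iE 'serial|wwn|scsi'
```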
How to reproduce it:
- Create a pod that mounts a PVC.
- After the pod is running, manually detach the disk using the Azure portal. (The pod will still show as Running.)
- Create another pod that mounts a separate PVC and assign it to the same node.
- Both pods should be running at this point.
- Delete & recreate the first pod
- The pod should go into a running state without the volume attaching
- At this point, they will both be using the same Volume
- To verify you can exec into both pods, create a file in the mounted directory in one and verify that it's shown in the other pod
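For concreteness, a minimal sketch of the manifests and commands behind these steps; the names, storage class, and node name are placeholders chosen for illustration, not the exact ones from my cluster:

```shell
# A PVC plus a pod that mounts it (placeholder names and storage class).
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-a
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: managed-csi
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-a
spec:
  nodeName: <node-name>   # pin the pod so the second pod can land on the same node
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: pvc-a
EOF

# Detach pod-a's data disk from the VM in the Azure portal (manual step),
# then apply the same manifest again as pvc-b / pod-b on the same node.

# Delete and recreate the first pod; it comes back Running even though its
# volume never attaches.
kubectl delete pod pod-a
# ...re-apply the pod-a manifest from above...

# Verify both pods now see the same volume.
kubectl exec pod-a -- touch /data/written-from-pod-a
kubectl exec pod-b -- ls /data   # the file written by pod-a shows up here too
```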
Anything else we need to know?:
Environment:
- CSI Driver version: 1.30.4
- Kubernetes version (use `kubectl version`): 1.28.9
- OS (e.g. from /etc/os-release): Ubuntu 20.04.6 LTS
- Kernel (e.g. `uname -a`): Linux 5.4.0-1138-azure #145-Ubuntu SMP Fri Aug 30 16:04:18 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
- Install tools:
- Others: