
Manual Volume detach case not handled #2537

Open
@CoreyCook8

Description

What happened:

After a volume was manually detached from a VM, two pods ended up mounting and using the same volume.

What you expected to happen:

In an AWS cluster, the same scenario produces this error message:

  Warning  FailedMount  14s (x6 over 30s)  kubelet            MountVolume.MountDevice failed for volume "pvc-XXXXX" : rpc error: code = Internal desc = Failed to find device path /dev/xvdaa. refusing to mount /dev/nvme3n1 because it claims to be volX but should be volY

I would expect this driver to handle the situation in a similar manner.
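The AWS error quoted above suggests the driver compares the serial reported by the NVMe device against the volume ID it expects before mounting, and refuses when they disagree. A minimal sketch of that kind of identity check in shell; the function name is mine, and the assumption that the device serial is the volume ID with its hyphen dropped (as on EBS NVMe devices) is stated in the comments, not taken from either driver's source:

```shell
# serial_matches_volume EXPECTED_VOLUME_ID DEVICE_SERIAL
# Assumes the device serial is the volume ID without its hyphen,
# e.g. volume "vol-0123abcd" -> serial "vol0123abcd". On a node the
# serial would be read from /sys/block/<device>/device/serial.
serial_matches_volume() {
  expected="$1"   # e.g. "vol-0123abcd"
  serial="$2"     # e.g. "vol0123abcd"
  [ "$(printf '%s' "$expected" | tr -d -- '-')" = "$(printf '%s' "$serial" | tr -d ' ')" ]
}

# A mount path would then guard itself like:
#   serial_matches_volume "$want" "$(cat /sys/block/nvme3n1/device/serial)" \
#     || { echo "refusing to mount: device claims a different volume" >&2; exit 1; }
```

A comparable check in the Azure driver would have caught the manual detach here, since the device found at the expected LUN would no longer identify as the original disk.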

How to reproduce it:

  1. Create a pod that mounts a PVC.
  2. After the pod is running, manually detach the disk using the Azure portal. (The pod will still show as Running.)
  3. Create another pod that mounts a PVC, and assign it to the same node.
  4. Both pods should be running at this point.
  5. Delete and recreate the first pod.
  6. The pod goes back into a Running state even though its volume never re-attaches.
  7. At this point, both pods will be using the same volume.
  8. To verify, exec into both pods, create a file in the mounted directory in one, and confirm it appears in the other pod.
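For steps 1 and 3, a manifest along these lines can be used (all names here are hypothetical, and it assumes a default StorageClass that provisions Azure Disks; repeat with `pvc-b`/`pod-b` for the second pod):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-a
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-a
spec:
  nodeName: <node-name>   # pin both pods to the same node (step 3)
  containers:
  - name: app
    image: busybox
    command: ["sleep", "infinity"]
    volumeMounts:
    - name: data
      mountPath: /mnt/data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: pvc-a
```

Step 8 can then be checked with `kubectl exec pod-a -- touch /mnt/data/marker` followed by `kubectl exec pod-b -- ls /mnt/data`; if `marker` shows up in the second pod, both pods are backed by the same disk.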

Anything else we need to know?:

Environment:

  • CSI Driver version: 1.30.4
  • Kubernetes version (use kubectl version): 1.28.9
  • OS (e.g. from /etc/os-release): Ubuntu 20.04.6 LTS
  • Kernel (e.g. uname -a): Linux 5.4.0-1138-azure #145-Ubuntu SMP Fri Aug 30 16:04:18 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
  • Install tools:
  • Others:


Labels: lifecycle/stale (denotes an issue or PR that has remained open with no activity and has become stale)
