Open
Labels: kind/bug (Categorizes issue or PR as related to a bug.)
Description
/kind bug
What happened?
New VolumeSnapshotContents periodically get stuck at ReadyToUse: false, significantly beyond (20+ minutes) the point at which the snapshot is ready/complete in AWS. csi-snapshotter logs the expected "Creating snapshot for content" messages, but these eventually stop, leaving the VolumeSnapshotContent not ready. Terminating the leader pod, which triggers a new leader election, immediately causes the VolumeSnapshotContent to transition to ready.
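The workaround above (rolling the leader pod) can be sketched with kubectl. The namespace, label selector, and lease name below are assumptions and vary by driver and deployment; adjust them for your cluster:

```shell
# List the controller pods that run the csi-snapshotter sidecar
# (namespace and label are assumptions for an EBS CSI deployment)
kubectl -n kube-system get pods -l app=ebs-csi-controller

# Inspect the snapshotter leader-election lease to see which replica
# currently holds leadership (lease name is an assumption)
kubectl -n kube-system get lease \
  external-snapshotter-leader-ebs-csi-aws-com -o yaml

# Delete the leader pod; a new leader is elected and, per the report,
# the stuck VolumeSnapshotContent immediately transitions to ready
kubectl -n kube-system delete pod <leader-pod-name>
```

This only clears the symptom; the stuck state recurs until the underlying snapshotter issue is fixed.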
What you expected to happen?
The VolumeSnapshotContents should become ready once the snapshot is ready/complete in AWS, without having to roll the pods.
How to reproduce it (as minimally and precisely as possible)?
- Create a new VolumeSnapshot
- Get the resulting VolumeSnapshotContent to retrieve the snapshotHandle
- Watch the VolumeSnapshotContent
- Observe the snapshot creation in AWS until it succeeds
- The VolumeSnapshotContent remains ReadyToUse: false
- csi-snapshotter logs stop reporting "Creating snapshot for content"
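The reproduction steps above can be sketched as follows. The VolumeSnapshot name, namespace, snapshot class, and PVC name are assumptions for illustration:

```shell
# Step 1: create a new VolumeSnapshot from an existing PVC
# (all names below are placeholders, not from the report)
kubectl apply -f - <<EOF
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: test-snapshot
  namespace: default
spec:
  volumeSnapshotClassName: ebs-csi-aws   # assumption
  source:
    persistentVolumeClaimName: my-pvc    # assumption
EOF

# Step 2: retrieve the snapshotHandle from the bound VolumeSnapshotContent
kubectl get volumesnapshotcontent \
  -o jsonpath='{.items[*].status.snapshotHandle}'

# Step 3: confirm on the AWS side that the EBS snapshot completed
aws ec2 describe-snapshots --snapshot-ids <snapshotHandle> \
  --query 'Snapshots[0].State'

# Step 4: watch readiness; in the failure mode readyToUse stays false
# even after the AWS snapshot reports "completed"
kubectl get volumesnapshotcontent -w
```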
Environment
- Kubernetes version (use kubectl version):
Client Version: v1.32.3
Kustomize Version: v5.5.0
Server Version: v1.32.8-eks-e386d34
- Driver version:
- chart version 2.49.1
- driver image 1.49.1