Labels
kind/bug (Categorizes issue or PR as related to a bug)
Description
What steps did you take and what happened:
Resizing an ext4/xfs volume leads to a filesystem shutdown.
At that point we have to delete the pod and recreate it.
Relatedly, the pod cannot be deleted gracefully; we must manually unmount the mount path.
(see #615)
What did you expect to happen:
I expected the resize to work with no hiccups.
The output of the following commands will help us better understand what's going on:
kubectl logs -f openebs-zfs-node-[xxxx] -n openebs -c openebs-zfs-plugin

```
I0121 10:51:10.704941       1 zfs_util.go:766] Running [ zfs ] [set volsize=8589934592 zfspv-pool/pvc-3d930441-5fc0-4ef2-9133-7a325bb5b97d]
I0121 10:51:10.766100       1 resize.go:42] Running [ xfs_growfs ] /var/lib/kubelet/pods/8a928963-f016-46c2-8512-05a561ed1f6a/volumes/kubernetes.io~csi/pvc-3d930441-5fc0-4ef2-9133-7a325bb5b97d/mount
E0121 10:51:10.773620       1 resize.go:46] zfspv: ResizeXFS failed error: xfs_growfs: XFS_IOC_FSGROWFSDATA xfsctl failed: Input/output error
meta-data=/dev/zd0               isize=512    agcount=4, agsize=327680 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1 spinodes=0 rmapbt=0
         =                       reflink=0
data     =                       bsize=4096   blocks=1310720, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal               bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
```
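For context, the sizes in the log can be decoded like this (a quick sanity check added for clarity, not part of the original report; the values are copied from the log above):

```shell
# Decode the sizes from the xfs_growfs output and the plugin log above.
old_blocks=1310720       # "blocks=" from the xfs_growfs meta-data dump
bsize=4096               # filesystem block size ("bsize=")
new_volsize=8589934592   # the volsize the plugin set, in bytes

echo $((old_blocks * bsize))   # current filesystem size in bytes: 5368709120 (5 GiB)
echo $((new_volsize / bsize))  # block count xfs_growfs would grow to: 2097152
```

So the growfs ioctl fails while trying to grow a 5 GiB filesystem to the new 8 GiB zvol size.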
dmesg:

```
[  323.276471] XFS (zd0): Block device removal (0x20) detected at fs_bdev_mark_dead+0x59/0x70 (fs/xfs/xfs_super.c:1177). Shutting down filesystem.
[  323.276587] XFS (zd0): Please unmount the filesystem and rectify the problem(s)
```
zpool history:

```
[root@nixos:/zfs]# zpool history
History for 'zfspv-pool':
2025-01-21.15:15:41 zpool create zfspv-pool /dev/loop0
2025-01-21.15:18:32 zfs create -s -V 5368709120 -o dedup=on -o compression=zstd-6 zfspv-pool/pvc-f5e204ba-3797-47f9-bfe7-236d078292f7
2025-01-21.15:18:43 zfs snapshot zfspv-pool/pvc-f5e204ba-3797-47f9-bfe7-236d078292f7@snapshot-5b5286c5-0a75-4727-ae75-ad7ef22f0c34
2025-01-21.15:18:48 zfs clone -o dedup=on -o compression=zstd-6 zfspv-pool/pvc-f5e204ba-3797-47f9-bfe7-236d078292f7@snapshot-5b5286c5-0a75-4727-ae75-ad7ef22f0c34 zfspv-pool/pvc-105973ea-93af-4153-958a-31bb60ba9b58
2025-01-21.15:19:56 zfs set volsize=8589934592 zfspv-pool/pvc-f5e204ba-3797-47f9-bfe7-236d078292f7
```
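The history above can be condensed into a standalone reproduction sketch. This is an assumption-laden outline, not a tested script: it needs root, ZFS, and a spare loop device, so the destructive steps are guarded by a DRY_RUN flag (default on, which only prints the plan). The dataset name `pvc-demo` and mountpoint `/mnt/test` are placeholders.

```shell
#!/bin/sh
# Reproduction sketch for the resize failure, mirroring the zpool history and
# plugin log above. Set DRY_RUN=0 to actually execute (requires root + ZFS).
DRY_RUN=${DRY_RUN:-1}
run() {
  if [ "$DRY_RUN" = "1" ]; then echo "would run: $*"; else "$@"; fi
}

run zpool create zfspv-pool /dev/loop0
run zfs create -s -V 5368709120 -o dedup=on -o compression=zstd-6 zfspv-pool/pvc-demo
run mkfs.xfs /dev/zvol/zfspv-pool/pvc-demo
run mount /dev/zvol/zfspv-pool/pvc-demo /mnt/test

# The resize path the plugin takes (zfs_util.go / resize.go in the log):
run zfs set volsize=8589934592 zfspv-pool/pvc-demo
run xfs_growfs /mnt/test  # here xfs_growfs fails and XFS shuts down on 6.6.69
```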
Anything else you would like to add:
I was able to get it working by reverting to an older kernel version (6.1).
Environment:
- LocalPV-ZFS version: 2.7.0-develop
- Cloud provider or hardware configuration: qemu vm x86_64
- OS (e.g. from /etc/os-release): NixOS 24.11 (Vicuna)
- Kernel version: Linux nixos 6.6.69
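Since reverting to 6.1 fixed it, nodes can be screened for the suspect kernel range. This is a hypothetical helper: the report only shows 6.1 working and 6.6.69 failing, so the 6.2 cutoff below is an illustrative guess, not a confirmed boundary.

```shell
# ver_ge A B: true if version A >= version B, using GNU sort's version sort.
ver_ge() {
  [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$2" ]
}

kernel=$(uname -r)
if ver_ge "$kernel" "6.2"; then   # 6.2 is a guessed cutoff (see note above)
  echo "kernel $kernel: in the range where the shutdown was observed"
else
  echo "kernel $kernel: older than any failing kernel in this report"
fi
```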
Question
Could this be a kernel bug?