Hi there
I'm just testing zfswatcher and hit an issue right away, possibly because I use a dRAID pool, or because the pool is currently resilvering...
Platform:
Ubuntu 22.04
zfs-2.1.5-1ubuntu6~22.04.1
zfs-kmod-2.1.5-1ubuntu6
I see this error in the syslog:
Jul 26 16:18:05 box zfswatcher[1013799]: invalid line 7 in status output: scan: resilver (draid3:19d:24c:2s-0) in progress since Mon Jul 24 19:40:55 2023
Here is a view of my pool:
```
root@box:~# zpool status aggr0
  pool: aggr0
 state: DEGRADED
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: scrub repaired 0B in 2 days 10:05:20 with 0 errors on Tue Jul 11 10:29:22 2023
  scan: resilver (draid3:19d:24c:2s-0) in progress since Mon Jul 24 19:40:55 2023
        334T scanned at 2.13G/s, 306T issued at 1.95G/s, 359T total
        13.9T resilvered, 92.97% done, 03:22:35 to go
config:

        NAME                     STATE     READ WRITE CKSUM
        aggr0                    DEGRADED     0     0     0
          draid3:19d:24c:2s-0    DEGRADED     0     0     0
            0a-0                 ONLINE       0     0     0  (resilvering)
            0a-1                 ONLINE       0     0     0  (resilvering)
            0a-2                 ONLINE       0     0     0  (resilvering)
            0a-3                 ONLINE       0     0     0  (resilvering)
            0a-4                 ONLINE       0     0     0  (resilvering)
            0a-5                 ONLINE       0     0     0  (resilvering)
            0a-6                 ONLINE       0     0     0  (resilvering)
            0a-7                 ONLINE       0     0     0  (resilvering)
            0a-8                 ONLINE       0     0     0  (resilvering)
            0a-9                 ONLINE       0     0     0  (resilvering)
            0a-10                ONLINE       0     0     0  (resilvering)
            0a-11                ONLINE       0     0     0  (resilvering)
            spare-12             DEGRADED     0     0     0
              0a-12              UNAVAIL      3     4     0
              draid3-0-0         ONLINE       0     0     0  (resilvering)
            0a-13                ONLINE       0     0     0  (resilvering)
            0a-14                ONLINE       0     0     0  (resilvering)
            0a-15                ONLINE       0     0     0  (resilvering)
            0a-16                ONLINE       0     0     0  (resilvering)
            0a-17                ONLINE       0     0     0  (resilvering)
            0a-18                ONLINE       0     0     0  (resilvering)
            0a-19                ONLINE       0     0     0  (resilvering)
            0a-20                ONLINE       0     0     0  (resilvering)
            0a-21                ONLINE       0     0     0  (resilvering)
            0a-22                ONLINE       0     0     0  (resilvering)
            0a-23                ONLINE       0     0     0  (resilvering)
        special
          mirror-1               ONLINE       0     0     0
            nvme01-part1         ONLINE       0     0     0
            nvme02-part1         ONLINE       0     0     0
        cache
          sdc1                   ONLINE       0     0     0
          sdd1                   ONLINE       0     0     0
        spares
          draid3-0-0             INUSE     currently in use
          draid3-0-1             AVAIL
```
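The line zfswatcher rejects is the `scan: resilver ...` line, which for a dRAID pool carries an extra parenthesized vdev name (`(draid3:19d:24c:2s-0)`) that a plain resilver line does not have. I haven't checked zfswatcher's actual parser, but a sketch of a pattern that tolerates both variants (function and regex names are my own, not zfswatcher's) looks like this:

```go
package main

import (
	"fmt"
	"regexp"
)

// scanRe matches a resilver scan line; the parenthesized dRAID vdev name
// (e.g. "draid3:19d:24c:2s-0") is optional, so plain resilver lines from
// non-dRAID pools still match. This is a sketch, not zfswatcher's code.
var scanRe = regexp.MustCompile(`^\s*scan: resilver(?: \(([\w:-]+)\))? in progress since (.+)$`)

// parseScanLine returns the optional dRAID vdev name and whether the
// line matched at all.
func parseScanLine(line string) (vdev string, ok bool) {
	m := scanRe.FindStringSubmatch(line)
	if m == nil {
		return "", false
	}
	return m[1], true
}

func main() {
	for _, l := range []string{
		"  scan: resilver in progress since Mon Jul 24 19:40:55 2023",
		"  scan: resilver (draid3:19d:24c:2s-0) in progress since Mon Jul 24 19:40:55 2023",
	} {
		vdev, ok := parseScanLine(l)
		fmt.Printf("matched=%v vdev=%q\n", ok, vdev)
	}
}
```

So presumably the fix is just to make the scan-line pattern accept that optional parenthesized name.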
The 0a-xx device names are defined via the vdev_id.conf file; could that also cause issues?
I'm using a NetApp disk shelf, and the numbers match the physical bay locations in the shelf, which is nice :-)
```
multipath           no
topology            sas_direct
phys_per_port       4
slot                bay
enclosure_symlinks  yes
channel 02:00.0 2 0a-
channel 02:00.0 3 0b-
alias nvme01 /dev/disk/by-id/nvme-Seagate_FireCuda_530_ZP2000GM30013_7VR025QC
alias nvme02 /dev/disk/by-id/nvme-Seagate_FireCuda_530_ZP2000GM30013_7VR025Z6
```
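For anyone unfamiliar with the format: as I understand vdev_id(8), each `channel` line maps an HBA PCI slot plus SAS port number to a name prefix, and `slot bay` appends the enclosure bay number to that prefix, which is how `0a-12` ends up naming bay 12 on port 2. A trivial awk pass over the channel lines above illustrates the mapping (just a reading aid, not part of vdev_id itself):

```shell
# Print the PCI-address/port -> prefix mapping encoded by the channel lines.
awk '$1 == "channel" { printf "PCI %s, port %s -> prefix %s\n", $2, $3, $4 }' <<'EOF'
channel 02:00.0 2 0a-
channel 02:00.0 3 0b-
EOF
# prints: PCI 02:00.0, port 2 -> prefix 0a-
#         PCI 02:00.0, port 3 -> prefix 0b-
```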