### System information
| Type | Version/Name |
|---|---|
| Distribution Name | Proxmox (Debian) |
| Distribution Version | 9.1.4 (13/Trixie) |
| Kernel Version | 6.17.4-2-pve |
| Architecture | amd64 |
| OpenZFS Version | 2.3.4 |
### Describe the problem you're observing
Unable to delete a snapshot: the `zfs destroy` command hangs indefinitely (until the system is rebooted).
The system console shows `PANIC: zfs: rt={spa=raidpool vdev_guid=7096899609358038343 ms_id=88 ms_unflushed_frees}: adding segment (offset=1607f760000 size=1000) overlapping with existing one (offset=1607f760000 size=1000)`.
After a while the snapshot disappears from `zfs list -t snapshot`, but the destroy command never returns, and after a reboot the same snapshot reappears (sketched below).
Note: this is the only snapshot on the pool:
```
NAME                                         USED  AVAIL  REFER  MOUNTPOINT
raidpool/syncthing@2026-01-02_01.05.01--3d  44.4M      -   110G  -
```
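For clarity, roughly the sequence I observe, as a sketch (commands only; exact timings vary):

```sh
zfs list -t snapshot raidpool/syncthing                   # snapshot is listed
zfs destroy raidpool/syncthing@2026-01-02_01.05.01--3d    # hangs and never returns

# From another shell, while the destroy is still hung:
zfs list -t snapshot raidpool/syncthing                   # snapshot is no longer listed

# After rebooting the host:
zfs list -t snapshot raidpool/syncthing                   # the same snapshot is back
```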
I have run a scrub on the pool, but it found no errors:
```
  pool: raidpool
 state: ONLINE
status: Some supported and requested features are not enabled on the pool.
        The pool can still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(7) for details.
  scan: scrub repaired 0B in 11:01:16 with 0 errors on Sun Jan  4 15:38:38 2026
config:

        NAME                                 STATE     READ WRITE CKSUM
        raidpool                             ONLINE       0     0     0
          mirror-0                           ONLINE       0     0     0
            ata-ST4000VN006-3CW104_ZW62P4HX  ONLINE       0     0     0
            ata-ST4000VN006-3CW104_ZW62P4EW  ONLINE       0     0     0
          mirror-1                           ONLINE       0     0     0
            ata-ST4000VN006-3CW104_ZW62P1SM  ONLINE       0     0     0
            ata-ST4000VN006-3CW104_ZW62P2QS  ONLINE       0     0     0
          mirror-4                           ONLINE       0     0     0
            ata-ST4000VN006-3CW104_ZW639R7B  ONLINE       0     0     0
            ata-ST4000VN006-3CW104_ZW63A8J6  ONLINE       0     0     0
```
Note: I cannot enable the missing pool features, as `zpool upgrade` also hangs (sketch below); I would guess this has something to do with the same snapshot.
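The upgrade command that hangs (from the `action:` line of `zpool status` above; shown here only as a sketch):

```sh
zpool upgrade raidpool    # also hangs and never returns
```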
ZFS userland and kernel module versions:

```
zfs-2.3.4-pve1
zfs-kmod-2.3.4-pve1
```
I see other issues opened for `adding segment (offset=x size=y) overlapping with existing one (offset=x size=y)`, such as #17805, but they seem to involve circumstances that do not match my situation (I am not using encryption anywhere, a pool scrub shows no problems, etc.), so I opened a separate issue.
### Describe how to reproduce the problem
Attempt to delete the snapshot:

```
# zfs destroy raidpool/syncthing@2026-01-02_01.05.01--3d
```
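In case it helps with triage, here is a sketch of additional read-only state I could collect; I am assuming these commands are safe to run while the pool is imported, and I have not run all of them yet:

```sh
# Map the vdev_guid from the panic message to a device.
zpool status -g raidpool

# Recent internal pool events around the time of the panic.
zpool events -v raidpool | tail -n 200

# Metaslab / space map summary as zdb reads it from disk.
zdb -m raidpool

# Current value of the zfs_recover tunable (default 0 = panic on this kind of
# inconsistency); setting it to 1 is said to downgrade the panic to a warning,
# but I have not tried that.
cat /sys/module/zfs/parameters/zfs_recover
```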
### Include any warning/errors/backtraces from the system logs
```
[41963.459035] PANIC: zfs: rt={spa=raidpool vdev_guid=7096899609358038343 ms_id=88 ms_unflushed_frees}: adding segment (offset=1607f760000 size=1000) overlapping with existing one (offset=1607f760000 size=1000)
[41963.459070] Showing stack for process 1479
[41963.459074] CPU: 7 UID: 0 PID: 1479 Comm: txg_sync Tainted: P O 6.17.4-2-pve #1 PREEMPT(voluntary)
[41963.459078] Tainted: [P]=PROPRIETARY_MODULE, [O]=OOT_MODULE
[41963.459078] Hardware name: Supermicro Super Server/X10SDV-F, BIOS 2.6 02/05/2024
[41963.459080] Call Trace:
[41963.459081] <TASK>
[41963.459084] dump_stack_lvl+0x5f/0x90
[41963.459090] dump_stack+0x10/0x18
[41963.459093] spl_dumpstack+0x28/0x40 [spl]
[41963.459106] vcmn_err.cold+0x54/0x8d [spl]
[41963.459117] zfs_panic_recover+0x74/0xa0 [zfs]
[41963.459345] zfs_range_tree_add_impl+0x41c/0x1120 [zfs]
[41963.459543] ? dmu_zfetch+0x1f/0xd0 [zfs]
[41963.459743] zfs_range_tree_remove_xor_add_segment+0x53b/0x580 [zfs]
[41963.459956] zfs_range_tree_remove_xor_add+0x91/0x200 [zfs]
[41963.460170] metaslab_sync+0x283/0x950 [zfs]
[41963.460382] ? __pfx_read_tsc+0x10/0x10
[41963.460402] ? ktime_get_raw_ts64+0x3d/0x130
[41963.460406] ? mutex_lock+0x12/0x50
[41963.460409] vdev_sync+0x73/0x4f0 [zfs]
[41963.460583] ? mutex_lock+0x12/0x50
[41963.460585] spa_sync+0x617/0x1070 [zfs]
[41963.460770] ? spa_txg_history_init_io+0x11c/0x130 [zfs]
[41963.460970] txg_sync_thread+0x209/0x3b0 [zfs]
[41963.461153] ? __pfx_txg_sync_thread+0x10/0x10 [zfs]
[41963.461334] ? __pfx_thread_generic_wrapper+0x10/0x10 [spl]
[41963.461343] thread_generic_wrapper+0x60/0x80 [spl]
[41963.461350] kthread+0x10b/0x220
[41963.461353] ? __pfx_kthread+0x10/0x10
[41963.461356] ret_from_fork+0x208/0x240
[41963.461358] ? __pfx_kthread+0x10/0x10
[41963.461360] ret_from_fork_asm+0x1a/0x30
[41963.461364] </TASK>
[42149.118160] INFO: task txg_sync:1479 blocked for more than 122 seconds.
[42149.118166] Tainted: P O 6.17.4-2-pve #1
[42149.118168] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[42149.118169] task:txg_sync state:D stack:0 pid:1479 tgid:1479 ppid:2 task_flags:0x288040 flags:0x00004000
[42149.118175] Call Trace:
[42149.118177] <TASK>
[42149.118180] __schedule+0x468/0x1310
[42149.118206] schedule+0x27/0xf0
[42149.118211] vcmn_err.cold+0x6b/0x8d [spl]
[42149.118230] zfs_panic_recover+0x74/0xa0 [zfs]
[42149.118497] zfs_range_tree_add_impl+0x41c/0x1120 [zfs]
[42149.118735] ? dmu_zfetch+0x1f/0xd0 [zfs]
[42149.118944] zfs_range_tree_remove_xor_add_segment+0x53b/0x580 [zfs]
[42149.119191] zfs_range_tree_remove_xor_add+0x91/0x200 [zfs]
[42149.119394] metaslab_sync+0x283/0x950 [zfs]
[42149.119642] ? __pfx_read_tsc+0x10/0x10
[42149.119647] ? ktime_get_raw_ts64+0x3d/0x130
[42149.119670] ? mutex_lock+0x12/0x50
[42149.119674] vdev_sync+0x73/0x4f0 [zfs]
[42149.119863] ? mutex_lock+0x12/0x50
[42149.119866] spa_sync+0x617/0x1070 [zfs]
[42149.120105] ? spa_txg_history_init_io+0x11c/0x130 [zfs]
[42149.120297] txg_sync_thread+0x209/0x3b0 [zfs]
[42149.120497] ? __pfx_txg_sync_thread+0x10/0x10 [zfs]
[42149.120691] ? __pfx_thread_generic_wrapper+0x10/0x10 [spl]
[42149.120701] thread_generic_wrapper+0x60/0x80 [spl]
[42149.120709] kthread+0x10b/0x220
[42149.120712] ? __pfx_kthread+0x10/0x10
[42149.120715] ret_from_fork+0x208/0x240
[42149.120718] ? __pfx_kthread+0x10/0x10
[42149.120721] ret_from_fork_asm+0x1a/0x30
[42149.120725] </TASK>
[42271.996808] INFO: task txg_sync:1479 blocked for more than 245 seconds.
[42271.996814] Tainted: P O 6.17.4-2-pve #1
[42271.996816] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[42271.996817] task:txg_sync state:D stack:0 pid:1479 tgid:1479 ppid:2 task_flags:0x288040 flags:0x00004000
[42271.996824] Call Trace:
[42271.996827] <TASK>
[42271.996832] __schedule+0x468/0x1310
[42271.996843] schedule+0x27/0xf0
[42271.996849] vcmn_err.cold+0x6b/0x8d [spl]
[42271.996866] zfs_panic_recover+0x74/0xa0 [zfs]
[42271.997137] zfs_range_tree_add_impl+0x41c/0x1120 [zfs]
[42271.997334] ? dmu_zfetch+0x1f/0xd0 [zfs]
[42271.997530] zfs_range_tree_remove_xor_add_segment+0x53b/0x580 [zfs]
[42271.997777] zfs_range_tree_remove_xor_add+0x91/0x200 [zfs]
[42271.997979] metaslab_sync+0x283/0x950 [zfs]
[42271.998194] ? __pfx_read_tsc+0x10/0x10
[42271.998199] ? ktime_get_raw_ts64+0x3d/0x130
[42271.998204] ? mutex_lock+0x12/0x50
[42271.998207] vdev_sync+0x73/0x4f0 [zfs]
[42271.998421] ? mutex_lock+0x12/0x50
[42271.998423] spa_sync+0x617/0x1070 [zfs]
[42271.998616] ? spa_txg_history_init_io+0x11c/0x130 [zfs]
[42271.998844] txg_sync_thread+0x209/0x3b0 [zfs]
[42271.999027] ? __pfx_txg_sync_thread+0x10/0x10 [zfs]
[42271.999211] ? __pfx_thread_generic_wrapper+0x10/0x10 [spl]
[42271.999221] thread_generic_wrapper+0x60/0x80 [spl]
[42271.999228] kthread+0x10b/0x220
[42271.999232] ? __pfx_kthread+0x10/0x10
[42271.999235] ret_from_fork+0x208/0x240
[42271.999238] ? __pfx_kthread+0x10/0x10
[42271.999241] ret_from_fork_asm+0x1a/0x30
[42271.999245] </TASK>
[42394.875328] INFO: task txg_sync:1479 blocked for more than 368 seconds.
[42394.875334] Tainted: P O 6.17.4-2-pve #1
[42394.875336] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[42394.875337] task:txg_sync state:D stack:0 pid:1479 tgid:1479 ppid:2 task_flags:0x288040 flags:0x00004000
[42394.875353] Call Trace:
[42394.875355] <TASK>
[42394.875359] __schedule+0x468/0x1310
[42394.875367] schedule+0x27/0xf0
[42394.875372] vcmn_err.cold+0x6b/0x8d [spl]
[42394.875391] zfs_panic_recover+0x74/0xa0 [zfs]
[42394.875642] zfs_range_tree_add_impl+0x41c/0x1120 [zfs]
[42394.875848] ? dmu_zfetch+0x1f/0xd0 [zfs]
[42394.876073] zfs_range_tree_remove_xor_add_segment+0x53b/0x580 [zfs]
[42394.876297] zfs_range_tree_remove_xor_add+0x91/0x200 [zfs]
[42394.876507] metaslab_sync+0x283/0x950 [zfs]
[42394.876714] ? __pfx_read_tsc+0x10/0x10
[42394.876719] ? ktime_get_raw_ts64+0x3d/0x130
[42394.876723] ? mutex_lock+0x12/0x50
[42394.876726] vdev_sync+0x73/0x4f0 [zfs]
[42394.876914] ? mutex_lock+0x12/0x50
[42394.876916] spa_sync+0x617/0x1070 [zfs]
[42394.877143] ? spa_txg_history_init_io+0x11c/0x130 [zfs]
[42394.877386] txg_sync_thread+0x209/0x3b0 [zfs]
[42394.877579] ? __pfx_txg_sync_thread+0x10/0x10 [zfs]
[42394.877783] ? __pfx_thread_generic_wrapper+0x10/0x10 [spl]
[42394.877793] thread_generic_wrapper+0x60/0x80 [spl]
[42394.877801] kthread+0x10b/0x220
[42394.877805] ? __pfx_kthread+0x10/0x10
[42394.877808] ret_from_fork+0x208/0x240
[42394.877810] ? __pfx_kthread+0x10/0x10
[42394.877813] ret_from_fork_asm+0x1a/0x30
[42394.877817] </TASK>
[42517.754990] INFO: task txg_sync:1479 blocked for more than 491 seconds.
[42517.754996] Tainted: P O 6.17.4-2-pve #1
[42517.754998] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[42517.754999] task:txg_sync state:D stack:0 pid:1479 tgid:1479 ppid:2 task_flags:0x288040 flags:0x00004000
[42517.755004] Call Trace:
[42517.755006] <TASK>
[42517.755010] __schedule+0x468/0x1310
[42517.755018] schedule+0x27/0xf0
[42517.755023] vcmn_err.cold+0x6b/0x8d [spl]
[42517.755038] zfs_panic_recover+0x74/0xa0 [zfs]
[42517.755275] zfs_range_tree_add_impl+0x41c/0x1120 [zfs]
[42517.755501] ? dmu_zfetch+0x1f/0xd0 [zfs]
[42517.755695] zfs_range_tree_remove_xor_add_segment+0x53b/0x580 [zfs]
[42517.755894] zfs_range_tree_remove_xor_add+0x91/0x200 [zfs]
[42517.756129] metaslab_sync+0x283/0x950 [zfs]
[42517.756339] ? __pfx_read_tsc+0x10/0x10
[42517.756344] ? ktime_get_raw_ts64+0x3d/0x130
[42517.756349] ? mutex_lock+0x12/0x50
[42517.756352] vdev_sync+0x73/0x4f0 [zfs]
[42517.756558] ? mutex_lock+0x12/0x50
[42517.756560] spa_sync+0x617/0x1070 [zfs]
[42517.756756] ? spa_txg_history_init_io+0x11c/0x130 [zfs]
[42517.756969] txg_sync_thread+0x209/0x3b0 [zfs]
[42517.757173] ? __pfx_txg_sync_thread+0x10/0x10 [zfs]
[42517.757362] ? __pfx_thread_generic_wrapper+0x10/0x10 [spl]
[42517.757372] thread_generic_wrapper+0x60/0x80 [spl]
[42517.757380] kthread+0x10b/0x220
[42517.757384] ? __pfx_kthread+0x10/0x10
[42517.757388] ret_from_fork+0x208/0x240
[42517.757390] ? __pfx_kthread+0x10/0x10
[42517.757393] ret_from_fork_asm+0x1a/0x30
[42517.757397] </TASK>
[42640.633619] INFO: task txg_sync:1479 blocked for more than 614 seconds.
[42640.633629] Tainted: P O 6.17.4-2-pve #1
[42640.633631] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[42640.633633] task:txg_sync state:D stack:0 pid:1479 tgid:1479 ppid:2 task_flags:0x288040 flags:0x00004000
[42640.633641] Call Trace:
[42640.633645] <TASK>
[42640.633650] __schedule+0x468/0x1310
[42640.633662] schedule+0x27/0xf0
[42640.633669] vcmn_err.cold+0x6b/0x8d [spl]
[42640.633695] zfs_panic_recover+0x74/0xa0 [zfs]
[42640.634024] zfs_range_tree_add_impl+0x41c/0x1120 [zfs]
[42640.634329] ? dmu_zfetch+0x1f/0xd0 [zfs]
[42640.634637] zfs_range_tree_remove_xor_add_segment+0x53b/0x580 [zfs]
[42640.634942] zfs_range_tree_remove_xor_add+0x91/0x200 [zfs]
[42640.635240] metaslab_sync+0x283/0x950 [zfs]
[42640.635554] ? __pfx_read_tsc+0x10/0x10
[42640.635562] ? ktime_get_raw_ts64+0x3d/0x130
[42640.635569] ? mutex_lock+0x12/0x50
[42640.635575] vdev_sync+0x73/0x4f0 [zfs]
[42640.635859] ? mutex_lock+0x12/0x50
[42640.635864] spa_sync+0x617/0x1070 [zfs]
[42640.636156] ? spa_txg_history_init_io+0x11c/0x130 [zfs]
[42640.636427] txg_sync_thread+0x209/0x3b0 [zfs]
[42640.636679] ? __pfx_txg_sync_thread+0x10/0x10 [zfs]
[42640.636873] ? __pfx_thread_generic_wrapper+0x10/0x10 [spl]
[42640.636886] thread_generic_wrapper+0x60/0x80 [spl]
[42640.636894] kthread+0x10b/0x220
[42640.636899] ? __pfx_kthread+0x10/0x10
[42640.636902] ret_from_fork+0x208/0x240
[42640.636905] ? __pfx_kthread+0x10/0x10
[42640.636908] ret_from_fork_asm+0x1a/0x30
[42640.636913] </TASK>
[42763.511414] INFO: task txg_sync:1479 blocked for more than 737 seconds.
[42763.511420] Tainted: P O 6.17.4-2-pve #1
[42763.511422] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[42763.511423] task:txg_sync state:D stack:0 pid:1479 tgid:1479 ppid:2 task_flags:0x288040 flags:0x00004000
[42763.511428] Call Trace:
[42763.511430] <TASK>
[42763.511434] __schedule+0x468/0x1310
[42763.511442] schedule+0x27/0xf0
[42763.511447] vcmn_err.cold+0x6b/0x8d [spl]
[42763.511463] zfs_panic_recover+0x74/0xa0 [zfs]
[42763.511706] zfs_range_tree_add_impl+0x41c/0x1120 [zfs]
[42763.511935] ? dmu_zfetch+0x1f/0xd0 [zfs]
[42763.512161] zfs_range_tree_remove_xor_add_segment+0x53b/0x580 [zfs]
[42763.512387] zfs_range_tree_remove_xor_add+0x91/0x200 [zfs]
[42763.512622] metaslab_sync+0x283/0x950 [zfs]
[42763.512818] ? __pfx_read_tsc+0x10/0x10
[42763.512823] ? ktime_get_raw_ts64+0x3d/0x130
[42763.512828] ? mutex_lock+0x12/0x50
[42763.512831] vdev_sync+0x73/0x4f0 [zfs]
[42763.513036] ? mutex_lock+0x12/0x50
[42763.513039] spa_sync+0x617/0x1070 [zfs]
[42763.513245] ? spa_txg_history_init_io+0x11c/0x130 [zfs]
[42763.513442] txg_sync_thread+0x209/0x3b0 [zfs]
[42763.513634] ? __pfx_txg_sync_thread+0x10/0x10 [zfs]
[42763.513825] ? __pfx_thread_generic_wrapper+0x10/0x10 [spl]
[42763.513836] thread_generic_wrapper+0x60/0x80 [spl]
[42763.513843] kthread+0x10b/0x220
[42763.513847] ? __pfx_kthread+0x10/0x10
[42763.513851] ret_from_fork+0x208/0x240
[42763.513853] ? __pfx_kthread+0x10/0x10
[42763.513856] ret_from_fork_asm+0x1a/0x30
[42763.513860] </TASK>
[42763.514016] INFO: task zfs:946643 blocked for more than 122 seconds.
[42763.514018] Tainted: P O 6.17.4-2-pve #1
[42763.514020] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[42763.514021] task:zfs state:D stack:0 pid:946643 tgid:946643 ppid:18061 task_flags:0x480100 flags:0x00004006
[42763.514025] Call Trace:
[42763.514026] <TASK>
[42763.514028] __schedule+0x468/0x1310
[42763.514033] schedule+0x27/0xf0
[42763.514036] io_schedule+0x4c/0x80
[42763.514039] cv_wait_common+0xb0/0x140 [spl]
[42763.514047] ? __pfx_autoremove_wake_function+0x10/0x10
[42763.514051] __cv_wait_io+0x18/0x30 [spl]
[42763.514058] txg_wait_synced_flags+0xd8/0x130 [zfs]
[42763.514283] ? __pfx_zcp_eval_sig+0x10/0x10 [zfs]
[42763.514480] txg_wait_synced+0x10/0x60 [zfs]
[42763.514664] dsl_sync_task_common+0x13b/0x2f0 [zfs]
[42763.514860] ? __pfx_dsl_null_checkfunc+0x10/0x10 [zfs]
[42763.515053] ? __pfx_zcp_eval_sync+0x10/0x10 [zfs]
[42763.515253] ? __pfx_dsl_null_checkfunc+0x10/0x10 [zfs]
[42763.515473] ? __pfx_zcp_eval_sync+0x10/0x10 [zfs]
[42763.515654] dsl_sync_task_sig+0x14/0x30 [zfs]
[42763.515857] zcp_eval+0x543/0x980 [zfs]
[42763.516040] dsl_destroy_snapshots_nvl.part.0+0x120/0x230 [zfs]
[42763.516266] dsl_destroy_snapshots_nvl+0x36/0x50 [zfs]
[42763.516488] zfs_ioc_destroy_snaps+0x18a/0x1a0 [zfs]
[42763.516666] zfsdev_ioctl_common+0x43a/0x970 [zfs]
[42763.516842] zfsdev_ioctl+0x57/0xf0 [zfs]
[42763.517013] __x64_sys_ioctl+0xa5/0x100
[42763.517018] ? count_memcg_events+0xd7/0x1a0
[42763.517023] x64_sys_call+0x1151/0x2330
[42763.517025] do_syscall_64+0x80/0xa30
[42763.517029] ? set_ptes.isra.0+0x3b/0x90
[42763.517032] ? do_anonymous_page+0x106/0x990
[42763.517035] ? ___pte_offset_map+0x1c/0x180
[42763.517038] ? __handle_mm_fault+0xb55/0xfd0
[42763.517042] ? count_memcg_events+0xd7/0x1a0
[42763.517045] ? handle_mm_fault+0x254/0x370
[42763.517048] ? do_user_addr_fault+0x2f8/0x830
[42763.517051] ? irqentry_exit_to_user_mode+0x2e/0x290
[42763.517055] ? irqentry_exit+0x43/0x50
[42763.517058] ? exc_page_fault+0x90/0x1b0
[42763.517061] entry_SYSCALL_64_after_hwframe+0x76/0x7e
[42763.517064] RIP: 0033:0x726b4a1f88db
[42763.517068] RSP: 002b:00007ffde6a6b090 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
[42763.517071] RAX: ffffffffffffffda RBX: 0000000000005a3b RCX: 0000726b4a1f88db
[42763.517073] RDX: 00007ffde6a6b110 RSI: 0000000000005a3b RDI: 0000000000000004
[42763.517074] RBP: 00007ffde6a6e700 R08: 0000726b4a2d3ac0 R09: 0000000000000001
[42763.517076] R10: 0000726b4a2d4180 R11: 0000000000000246 R12: 00007ffde6a6b110
[42763.517077] R13: 0000000000000001 R14: 00007ffde6a6e860 R15: 00007ffde6a6e710
[42763.517080] </TASK>
```