Hi,
I'm using syzkaller 1c4febd (Apr 3rd) to fuzz on x86_64 qemu with:

```json
"procs": 8,
"fuzzing_vms": 10,
"type": "qemu",
"vm": {
    "count": 10,
    "kernel": "/local/mnt/workspace/jiangenj/syzkaller/x86_64/linux/arch/x86/boot/bzImage",
    "cpu": 2,
    "mem": 4096,
    "cmdline": "nokaslr"
},
"cover": true,
"reproduce": true
```
However, after 11263 minutes (~7.8 days), the two syz-manager instances running 1c4febd got killed. Another syz-manager instance based on 875573a (Mar 23rd) has been running since March 23rd without being killed.

Host dmesg shows OOM kills occurred; one example:
[4461333.408689] qemu-system-x86 invoked oom-killer: gfp_mask=0x140cca(GFP_HIGHUSER_MOVABLE|__GFP_COMP), order=0, oom_score_adj=0
[4461333.408700] CPU: 3 PID: 979963 Comm: qemu-system-x86 Tainted: P OE 6.8.0-53-generic #55-Ubuntu
[4461333.408703] Hardware name: Dell Inc. PowerEdge FC640/0CTHW9, BIOS 2.14.2 03/23/2022
[4461333.408705] Call Trace:
[4461333.408709] <TASK>
[4461333.408712] dump_stack_lvl+0x76/0xa0
[4461333.408720] dump_stack+0x10/0x20
[4461333.408723] dump_header+0x47/0x1f0
[4461333.408729] oom_kill_process+0x118/0x280
[4461333.408732] ? oom_evaluate_task+0x143/0x1e0
[4461333.408735] out_of_memory+0x103/0x350
[4461333.408738] __alloc_pages_may_oom+0x10c/0x1d0
[4461333.408744] __alloc_pages_slowpath.constprop.0+0x420/0x9f0
[4461333.408748] __alloc_pages+0x31f/0x350
[4461333.408753] alloc_pages_mpol+0x91/0x210
[4461333.408760] ? kvm_pic_set_irq+0x119/0x260 [kvm]
[4461333.408866] folio_alloc+0x64/0x120
[4461333.408870] ? filemap_get_entry+0xe5/0x160
[4461333.408874] filemap_alloc_folio+0xf4/0x100
[4461333.408877] __filemap_get_folio+0x14b/0x2f0
[4461333.408880] filemap_fault+0x15c/0x8e0
[4461333.408884] __do_fault+0x3a/0x190
[4461333.408888] do_read_fault+0x133/0x200
[4461333.408891] do_fault+0xf0/0x260
[4461333.408894] handle_pte_fault+0x114/0x1d0
[4461333.408896] __handle_mm_fault+0x654/0x800
[4461333.408899] handle_mm_fault+0x18a/0x380
[4461333.408901] do_user_addr_fault+0x169/0x670
[4461333.408905] exc_page_fault+0x83/0x1b0
[4461333.408909] asm_exc_page_fault+0x27/0x30
[4461333.408914] RIP: 0033:0x610a810bce70
[4461333.408957] Code: Unable to access opcode bytes at 0x610a810bce46.
[4461333.408958] RSP: 002b:00007ffeb07f9ac8 EFLAGS: 00010246
[4461333.408961] RAX: 0000000000000000 RBX: 0000000000000000 RCX: ffffffffffffffff
[4461333.408963] RDX: 0000000000000000 RSI: 0000000000000001 RDI: 0000610aafd73890
[4461333.408964] RBP: 00007ffeb07f9b20 R08: 0000000000000000 R09: 0000000000000000
[4461333.408965] R10: 0000000000000000 R11: 0000000000000000 R12: 00007ffeb07f9ae0
[4461333.408966] R13: 00007ffeb07f9adc R14: 0000610aafd73890 R15: 00007d0fb9bc7000
[4461333.408969] </TASK>
[4461333.408970] Mem-Info:
[4461333.408974] active_anon:17256678 inactive_anon:29053025 isolated_anon:0
active_file:0 inactive_file:62 isolated_file:0
unevictable:5545 dirty:2 writeback:0
slab_reclaimable:40390 slab_unreclaimable:503512
mapped:3875 shmem:2163 pagetables:113626
sec_pagetables:2921 bounce:0
kernel_misc_reclaimable:0
free:127533 free_pcp:343 free_cma:0
[4461333.408980] Node 0 active_anon:27629616kB inactive_anon:63681392kB active_file:0kB inactive_file:564kB unevictable:5796kB isolated(anon):0kB isolated(file):0kB mapped:4408kB dirty:8kB writeback:0kB shmem:5460kB shmem_thp:0kB shmem_pmdmapped:0kB anon_thp:24305664kB writeback_tmp:0kB kernel_stack:12184kB pagetables:170392kB sec_pagetables:7252kB all_unreclaimable? no
[4461333.408986] Node 1 active_anon:41397096kB inactive_anon:52530708kB active_file:0kB inactive_file:0kB unevictable:16384kB isolated(anon):0kB isolated(file):0kB mapped:11092kB dirty:0kB writeback:0kB shmem:3192kB shmem_thp:0kB shmem_pmdmapped:0kB anon_thp:706560kB writeback_tmp:0kB kernel_stack:13944kB pagetables:284112kB sec_pagetables:4432kB all_unreclaimable? yes
[4461333.408991] Node 0 DMA free:11264kB boost:0kB min:4kB low:16kB high:28kB reserved_highatomic:0KB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB present:15996kB managed:15360kB mlocked:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
[4461333.408996] lowmem_reserve[]: 0 1521 95176 95176 95176
[4461333.409000] Node 0 DMA32 free:375216kB boost:0kB min:712kB low:2268kB high:3824kB reserved_highatomic:0KB active_anon:1103872kB inactive_anon:139288kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB present:1690624kB managed:1623360kB mlocked:0kB bounce:0kB free_pcp:740kB local_pcp:0kB free_cma:0kB
[4461333.409005] lowmem_reserve[]: 0 0 93655 93655 93655
[4461333.409009] Node 0 Normal free:60752kB boost:0kB min:43964kB low:139864kB high:235764kB reserved_highatomic:18432KB active_anon:26525744kB inactive_anon:63542104kB active_file:0kB inactive_file:0kB unevictable:5796kB writepending:8kB present:97517568kB managed:95911784kB mlocked:4260kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
[4461333.409014] lowmem_reserve[]: 0 0 0 0 0
[4461333.409017] Node 1 Normal free:62900kB boost:0kB min:45420kB low:144496kB high:243572kB reserved_highatomic:47104KB active_anon:41397096kB inactive_anon:52530708kB active_file:0kB inactive_file:0kB unevictable:16384kB writepending:0kB present:100663296kB managed:99077716kB mlocked:14848kB bounce:0kB free_pcp:632kB local_pcp:0kB free_cma:0kB
[4461333.409022] lowmem_reserve[]: 0 0 0 0 0
[4461333.409025] Node 0 DMA: 0*4kB 0*8kB 0*16kB 0*32kB 0*64kB 0*128kB 0*256kB 0*512kB 1*1024kB (U) 1*2048kB (M) 2*4096kB (M) = 11264kB
[4461333.409036] Node 0 DMA32: 0*4kB 2*8kB (UM) 0*16kB 2*32kB (UM) 0*64kB 5*128kB (UM) 7*256kB (UM) 6*512kB (UM) 5*1024kB (UM) 2*2048kB (UM) 88*4096kB (M) = 375248kB
[4461333.409048] Node 0 Normal: 283*4kB (UM) 189*8kB (UM) 179*16kB (UM) 1562*32kB (UME) 38*64kB (UM) 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 57924kB
[4461333.409060] Node 1 Normal: 559*4kB (UM) 360*8kB (UM) 198*16kB (UM) 1678*32kB (UM) 1*64kB (M) 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 62044kB
[4461333.409071] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=1048576kB
[4461333.409073] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
[4461333.409075] Node 1 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=1048576kB
[4461333.409077] Node 1 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
[4461333.409078] 169162 total pagecache pages
[4461333.409079] 163582 pages in swap cache
[4461333.409081] Free swap = 16kB
[4461333.409081] Total swap = 33554428kB
[4461333.409083] 49971871 pages RAM
[4461333.409084] 0 pages HighMem/MovableOnly
[4461333.409085] 814816 pages reserved
[4461333.409086] 0 pages hwpoisoned
[4461333.409087] Tasks state (memory values in pages):
[4461333.409088] [ pid ] uid tgid total_vm rss rss_anon rss_file rss_shmem pgtables_bytes swapents oom_score_adj name
[4461333.409137] [ 989] 0 989 13001 312 312 0 0 114688 0 -250 systemd-journal
[4461333.409142] [ 1055] 0 1055 7496 2486 832 1654 0 90112 0 -1000 systemd-udevd
[4461333.409161] [ 1905] 998 1905 5559 1352 208 1144 0 90112 104 0 systemd-network
[4461333.409165] [ 2010] 110 2010 1992 936 104 832 0 61440 0 0 rpcbind
[4461333.409168] [ 2012] 992 2012 5396 1872 208 1664 0 86016 312 0 systemd-resolve
[4461333.409171] [ 2130] 101 2130 2837 832 104 728 0 61440 0 -900 dbus-daemon
[4461333.409175] [ 2132] 112 2132 1206 416 0 416 0 49152 0 0 rpc.statd
[4461333.409178] [ 2138] 0 2138 1357 416 0 416 0 57344 0 0 fsidd
[4461333.409181] [ 2140] 0 2140 8071 1560 520 1040 0 98304 1768 0 networkd-dispat
[4461333.409184] [ 2145] 991 2145 77040 936 0 936 0 114688 104 0 polkitd
[4461333.409187] [ 2165] 0 2165 10803 312 0 312 0 65536 104 0 rpc.mountd
[4461333.409189] [ 2223] 0 2223 2257 832 104 728 0 61440 0 -1000 tgtd
[4461333.409192] [ 2235] 0 2235 117472 1348 100 1248 0 163840 104 0 udisksd
[4461333.409195] [ 2250] 0 2250 2421 728 0 728 0 61440 0 0 xinetd
[4461333.409198] [ 2257] 0 2257 60347 1144 104 1040 0 114688 104 0 zed
[4461333.409201] [ 2355] 0 2355 9382 1253 421 832 0 147456 0 0 .vasd
[4461333.409204] [ 2375] 0 2375 56247 1144 208 936 0 102400 0 0 rsyslogd
[4461333.409207] [ 2377] 0 2377 1285 312 0 312 0 53248 0 0 blkmapd
[4461333.409209] [ 2378] 0 2378 807 416 0 416 0 53248 0 0 rpc.idmapd
[4461333.409212] [ 2383] 0 2383 1408 416 0 416 0 57344 0 0 nfsdcld
[4461333.409215] [ 2386] 0 2386 1355877 5318 3966 1352 0 995328 832 -999 containerd
[4461333.409218] [ 2404] 0 2404 10187 1644 812 832 0 151552 0 0 .vasd
[4461333.409221] [ 2408] 0 2408 98022 1352 416 936 0 139264 0 0 ModemManager
[4461333.409224] [ 2413] 0 2413 10953 2288 1768 520 0 163840 0 0 .vasd
[4461333.409227] [ 2511] 0 2511 1545563 10118 9286 832 0 1392640 1456 -500 dockerd
[4461333.409231] [ 2519] 0 2519 6209 1699 659 1040 0 98304 520 0 systemd-logind
[4461333.409233] [ 2530] 0 2530 27410 2808 1768 1040 0 118784 520 0 unattended-upgr
[4461333.409237] [ 2716] 0 2716 1101 104 0 104 0 49152 104 0 afsd
[4461333.409241] [ 2829] 0 2829 23231 3120 728 1768 624 180224 0 0 smbd
[4461333.409244] [ 2880] 0 2880 21956 1618 786 832 0 139264 0 0 smbd-notifyd
[4461333.409247] [ 2881] 0 2881 21960 1410 786 624 0 135168 0 0 smbd-cleanupd
[4461333.409253] [ 3247] 0 3247 945 520 0 520 0 53248 0 0 atd
[4461333.409255] [ 3251] 0 3251 166966 2683 1331 1352 0 208896 416 0 automount
[4461333.409258] [ 3288] 0 3288 1526 416 0 416 0 53248 0 0 agetty
[4461333.409261] [ 3419] 0 3419 115473 31149 29693 1456 0 1036288 8632 0 splunkd
[4461333.409264] [ 3421] 0 3421 22834 2850 2122 728 0 143360 0 -1000 splunkd
[4461333.409267] [ 3529] 96855 3529 4879 1694 134 1560 0 86016 624 0 ssh
[4461333.409270] [ 3534] 96855 3534 4737 1872 0 1872 0 94208 520 0 ssh
[4461333.409291] [ 3535] 96855 3535 4433 1872 104 1768 0 86016 208 0 ssh
[4461333.409294] [ 3536] 96855 3536 4433 2080 104 1976 0 77824 208 0 ssh
[4461333.409296] [ 3548] 96855 3548 4433 1976 104 1872 0 86016 104 0 ssh
[4461333.409299] [ 3561] 0 3561 3057 1768 208 1560 0 69632 0 -1000 sshd
[4461333.409302] [ 3562] 0 3562 6756 1876 316 1560 0 98304 0 0 sshd
[4461333.409305] [ 3667] 96855 3667 5302 1768 624 1144 0 81920 104 100 systemd
[4461333.409308] [ 3668] 96855 3668 8306 854 438 416 0 90112 0 100 (sd-pam)
[4461333.409312] [ 3743] 96855 3743 57328 312 0 312 0 86016 0 0 sshfs
[4461333.409314] [ 3762] 96855 3762 94857 608 400 208 0 151552 3748 0 sshfs
[4461333.409317] [ 3766] 96855 3766 6821 1529 385 1144 0 102400 0 0 sshd
[4461333.409321] [ 3767] 96855 3767 57328 208 0 208 0 86016 0 0 sshfs
[4461333.409324] [ 3840] 96855 3840 1345 520 0 520 0 57344 0 0 sftp-server
[4461333.409326] [ 3850] 96855 3850 57328 312 0 312 0 90112 0 0 sshfs
[4461333.409330] [ 3861] 96855 3861 206956 1858 1650 208 0 204800 1352 0 sshfs
[4461333.409333] [ 3868] 96855 3868 57328 312 0 312 0 86016 0 0 sshfs
[4461333.409336] [ 3872] 96855 3872 57328 208 0 208 0 102400 0 0 sshfs
[4461333.409338] [ 3879] 96855 3879 57328 312 0 312 0 98304 0 0 sshfs
[4461333.409341] [ 3914] 96855 3914 206856 208 0 208 0 204800 104 0 sshfs
[4461333.409344] [ 4996] 0 4996 2364 728 0 728 0 61440 0 0 cron
[4461333.409347] [ 10816] 0 10816 3452 459 147 312 0 61440 0 0 iscsid
[4461333.409351] [ 10817] 0 10817 3555 3373 357 3016 0 69632 0 -1000 iscsid
[4461333.409354] [ 133319] 0 133319 146310 30825 28803 2022 0 479232 104 0 fwupd
[4461333.409357] [ 133337] 0 133337 78454 1560 208 1352 0 131072 0 0 upowerd
[4461333.409359] [2927972] 96855 2927972 11039 5805 4765 1040 0 131072 1664 0 tmux: server
[4461333.409363] [2927973] 96855 2927973 2923 1040 312 728 0 73728 104 0 bash
[4461333.409366] [2927974] 96855 2927974 2655 1040 104 936 0 69632 0 200 dbus-daemon
[4461333.409369] [3291052] 0 3291052 6757 1876 316 1560 0 98304 0 0 sshd
[4461333.409372] [3291139] 96855 3291139 6822 1635 491 1144 0 102400 0 0 sshd
[4461333.409375] [3291140] 96855 3291140 1345 416 0 416 0 61440 0 0 sftp-server
[4461333.409378] [3469416] 96855 3469416 1835 624 0 624 0 61440 0 0 run.sh
[4461333.409380] [3469420] 96855 3469420 1835 624 0 624 0 61440 0 0 run-helper.sh
[4461333.409383] [3469424] 96855 3469424 989340 28263 26183 2080 0 770048 6695 0 Runner.Listener
[4461333.409385] [ 468436] 96855 468436 2882 832 104 728 0 65536 312 0 bash
[4461333.409388] [ 468927] 96855 468927 4433 2184 208 1976 0 86016 104 0 ssh
[4461333.409391] [ 468936] 96855 468936 4433 1768 104 1664 0 81920 208 0 ssh
[4461333.409394] [ 485402] 96855 485402 2880 624 0 624 0 69632 312 0 bash
[4461333.409398] [ 939520] 96855 939520 1835 624 0 624 0 53248 0 0 run.sh
[4461333.409400] [ 939535] 96855 939535 812485 1960 1960 0 0 512000 0 0 docker
[4461333.409403] [ 939643] 0 939643 417837 1560 208 1352 0 172032 0 -500 docker-proxy
[4461333.409406] [ 939658] 0 939658 417837 1144 104 1040 0 176128 104 -500 docker-proxy
[4461333.409410] [ 939707] 0 939707 417709 1456 208 1248 0 167936 0 -500 docker-proxy
[4461333.409413] [ 939723] 0 939723 399276 1144 104 1040 0 159744 104 -500 docker-proxy
[4461333.409415] [ 939778] 0 939778 309557 1022 1022 0 0 122880 0 -998 containerd-shim
[4461333.409418] [ 939798] 96855 939798 1240 520 104 416 0 53248 0 0 bash
[4461333.409421] [ 965224] 96855 965224 701823 1352 1352 0 0 434176 832 0 docker
[4461333.409423] [ 965243] 96855 965243 1216 624 104 520 0 49152 0 0 bash
[4461333.409426] [ 965736] 96855 965736 3191 1248 624 624 0 73728 0 0 bash
[4461333.409430] [3928035] 96855 3928035 2851 832 104 728 0 73728 312 0 bash
[4461333.409433] [2641094] 0 2641094 6757 1980 316 1664 0 98304 0 0 sshd
[4461333.409436] [2641096] 96855 2641096 6960 1946 594 1352 0 102400 0 0 sshd
[4461333.409439] [2641097] 96855 2641097 1393 520 0 520 0 57344 0 0 sftp-server
[4461333.409442] [3648992] 96855 3648992 4519 1872 104 1768 0 86016 208 0 ssh
[4461333.409447] [ 798322] 0 798322 6757 1876 316 1560 0 98304 0 0 sshd
[4461333.409450] [ 798324] 96855 798324 6822 1739 387 1352 0 102400 0 0 sshd
[4461333.409453] [ 798325] 96855 798325 1345 520 0 520 0 49152 0 0 sftp-server
[4461333.409455] [ 857462] 96855 857462 4433 1872 208 1664 0 77824 208 0 ssh
[4461333.409461] [ 923531] 0 923531 6757 1980 420 1560 0 98304 0 0 sshd
[4461333.409464] [ 923534] 96855 923534 6975 1739 491 1248 0 102400 104 0 sshd
[4461333.409466] [ 923535] 96855 923535 1426 416 0 416 0 57344 104 0 sftp-server
[4461333.409473] [1079233] 0 1079233 1090259 2702 2702 0 0 753664 624 -900 snapd
[4461333.409476] [1054086] 96855 1054086 47360136 38658509 38657573 936 0 377032704 8350160 0 syz-manager
[4461333.409481] [2164322] 0 2164322 6757 1876 420 1456 0 98304 0 0 sshd
[4461333.409484] [2164324] 96855 2164324 6822 1633 385 1248 0 102400 0 0 sshd
[4461333.409486] [2164325] 96855 2164325 1345 416 0 416 0 53248 0 0 sftp-server
[4461333.409489] [2277729] 0 2277729 41340 2912 1456 1456 0 204800 0 0 osqueryd
[4461333.409492] [2277732] 0 2277732 230978 7807 6455 1352 0 405504 0 0 osqueryd
[4461333.409496] [2838079] 0 2838079 6756 1980 420 1560 0 94208 0 0 sshd
[4461333.409498] [2838081] 0 2838081 6755 1980 420 1560 0 98304 0 0 sshd
[4461333.409501] [2838085] 96855 2838085 6894 1426 282 1144 0 102400 104 0 sshd
[4461333.409503] [2838086] 96855 2838086 6820 1769 417 1352 0 102400 0 0 sshd
[4461333.409505] [2838089] 96855 2838089 2827 1144 416 728 0 69632 0 0 bash
[4461333.409508] [2838090] 96855 2838090 1376 416 0 416 0 61440 0 0 sftp-server
[4461333.409511] [2838119] 96855 2838119 2972 728 0 728 0 65536 104 0 tmux: client
[4461333.409514] [2920174] 0 2920174 6756 1980 420 1560 0 94208 0 0 sshd
[4461333.409516] [2920176] 96855 2920176 6999 1842 594 1248 0 102400 0 0 sshd
[4461333.409519] [2920177] 96855 2920177 1378 520 0 520 0 57344 0 0 sftp-server
[4461333.409522] [3214952] 96855 3214952 701823 3016 2080 936 0 438272 0 0 docker
[4461333.409525] [3214971] 96855 3214971 1147 416 104 312 0 49152 0 0 bash
[4461333.409530] [ 794255] 0 794255 9949 848 536 312 0 151552 0 0 .vasd
[4461333.409533] [ 794256] 0 794256 9300 832 208 624 0 143360 0 0 .vasd
[4461333.409535] [ 794257] 0 794257 9083 1248 208 1040 0 151552 0 0 .vasd
[4461333.409541] [ 902473] 111 902473 3509 2932 540 2392 0 77824 0 0 ntpd
[4461333.409544] [ 904751] 0 904751 120652 728 0 728 0 155648 0 0 nscd
[4461333.409547] [ 905110] 0 905110 10768 728 104 624 0 86016 0 0 master
[4461333.409550] [ 905112] 113 905112 11372 1248 104 1144 0 90112 0 0 qmgr
[4461333.409553] [ 911780] 0 911780 21592 2392 728 1560 104 159744 0 0 winbindd
[4461333.409556] [ 911784] 0 911784 21427 1637 701 936 0 143360 0 0 wb[LA-SH004-LNX
[4461333.409559] [ 911785] 0 911785 22493 2367 1119 1144 104 151552 0 0 wb[AP]
[4461333.409566] [ 958632] 113 958632 11362 1040 104 936 0 90112 0 0 pickup
[4461333.409572] [ 976725] 0 976725 6670 1248 312 936 0 90112 0 0 cron
[4461333.409575] [ 976729] 0 976729 1835 416 0 416 0 57344 0 0 sh
[4461333.409577] [ 976732] 0 976732 1868 416 0 416 0 57344 0 0 run_cron
[4461333.409580] [ 976773] 0 976773 3144 1144 520 624 0 69632 0 0 hostdelay
[4461333.409584] [ 979421] 0 979421 6670 1352 312 1040 0 90112 0 0 cron
[4461333.409586] [ 979422] 0 979422 1835 416 0 416 0 61440 0 0 sh
[4461333.409589] [ 979424] 0 979424 1868 520 0 520 0 57344 0 0 run_cron
[4461333.409592] [ 979434] 0 979434 3143 1144 520 624 0 69632 0 0 hostdelay
[4461333.409595] [ 979732] 96855 979732 1339294 821595 820555 1040 0 7606272 0 0 qemu-system-x86
[4461333.409598] [ 979799] 96855 979799 1324716 802278 801446 832 0 7467008 0 0 qemu-system-x86
[4461333.409602] [ 979811] 96855 979811 2970 312 312 0 0 61440 0 0 ssh
[4461333.409604] [ 979818] 96855 979818 1321889 800634 800114 520 0 7401472 0 0 qemu-system-x86
[4461333.409607] [ 979839] 96855 979839 1324716 777954 777538 416 0 7237632 0 0 qemu-system-x86
[4461333.409610] [ 979856] 96855 979856 1322403 785464 784736 728 0 7278592 0 0 qemu-system-x86
[4461333.409613] [ 979890] 96855 979890 1310581 741226 740082 1144 0 6811648 0 0 qemu-system-x86
[4461333.409616] [ 979905] 96855 979905 1311095 749941 749005 936 0 6889472 0 0 qemu-system-x86
[4461333.409618] [ 979928] 96855 979928 1310067 724455 723727 728 0 6680576 0 0 qemu-system-x86
[4461333.409621] [ 979937] 96855 979937 2955 520 312 208 0 65536 0 0 ssh
[4461333.409624] [ 979947] 96855 979947 2958 728 312 416 0 65536 0 0 ssh
[4461333.409627] [ 979954] 96855 979954 3039 416 312 104 0 61440 0 0 ssh
[4461333.409629] [ 979963] 96855 979963 1319319 637844 637532 312 0 6074368 0 0 qemu-system-x86
[4461333.409632] [ 979976] 96855 979976 2979 416 416 0 0 65536 0 0 ssh
[4461333.409635] [ 979993] 96855 979993 3029 624 416 208 0 61440 0 0 ssh
[4461333.409638] [ 979998] 96855 979998 2955 312 312 0 0 65536 0 0 ssh
[4461333.409640] [ 980004] 96855 980004 2955 520 312 208 0 69632 0 0 ssh
[4461333.409642] [ 980014] 96855 980014 1287522 475443 474403 1040 0 4644864 0 0 qemu-system-x86
[4461333.409646] [ 980034] 96855 980034 3011 312 312 0 0 69632 0 0 ssh
[4461333.409648] [ 980035] 96855 980035 2842 208 208 0 0 61440 0 0 ssh
[4461333.409651] oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=docker-8d6f7821b4fb4ab1ec4b8504a3d658d7f3f4929d7c661ae90c1fd352d01cc6a0.scope,mems_allowed=0-1,global_oom,task_memcg=/system.slice/docker-8d6f7821b4fb4ab1ec4b8504a3d658d7f3f4929d7c661ae90c1fd352d01cc6a0.scope,task=syz-manager,pid=1054086,uid=96855
[4461333.409807] Out of memory: Killed process 1054086 (syz-manager) total-vm:189440544kB, anon-rss:154630292kB, file-rss:3744kB, shmem-rss:0kB, UID:96855 pgtables:368196kB oom_score_adj:0
[4461335.433429] systemd-journald[989]: Under memory pressure, flushing caches.
[4461343.446805] oom_reaper: reaped process 1054086 (syz-manager), now anon-rss:0kB, file-rss:3744kB, shmem-rss:0kB
These three hosts are identical, each with 192 GB of RAM, and no other heavy tasks run on them.
Any ideas? Could this be a memory leak?
Below are pprof heap snapshots taken over time:
$ go tool pprof http://localhost:6060/debug/pprof/heap
Fetching profile over HTTP from http://localhost:6060/debug/pprof/heap
Saved profile in /usr2/jiangenj/pprof/pprof.syz-manager.alloc_objects.alloc_space.inuse_objects.inuse_space.001.pb.gz
File: syz-manager
Type: inuse_space
Time: 2025-05-20 09:42:15 CST
Entering interactive mode (type "help" for commands, "o" for options)
(pprof) top
Showing nodes accounting for 833.43MB, 86.37% of 964.98MB total
Dropped 133 nodes (cum <= 4.82MB)
Showing top 10 nodes out of 103
flat flat% sum% cum cum%
291.25MB 30.18% 30.18% 291.25MB 30.18% slices.Grow[go.shape.[]uint8,go.shape.uint8]
274.83MB 28.48% 58.66% 274.83MB 28.48% github.com/google/syzkaller/prog.MakeDataArg (inline)
123.27MB 12.77% 71.44% 123.27MB 12.77% github.com/google/syzkaller/prog.(*Target).BuildChoiceTable
35.32MB 3.66% 75.10% 46.82MB 4.85% github.com/google/syzkaller/pkg/symbolizer.read
29.12MB 3.02% 78.11% 29.12MB 3.02% slices.Clone[go.shape.[]uint8,go.shape.uint8]
28.37MB 2.94% 81.05% 28.37MB 2.94% github.com/google/syzkaller/pkg/signal.(*Signal).Merge
13.50MB 1.40% 82.45% 13.50MB 1.40% github.com/google/syzkaller/prog.MakePointerArg (inline)
13MB 1.35% 83.80% 35.02MB 3.63% github.com/google/syzkaller/prog.(*parser).parseArgStruct
12.50MB 1.30% 85.10% 12.50MB 1.30% github.com/google/syzkaller/prog.MakeResultArg
12.27MB 1.27% 86.37% 12.27MB 1.27% bytes.growSlice
Time: 2025-05-20 09:45:29 CST
Entering interactive mode (type "help" for commands, "o" for options)
(pprof) top
Showing nodes accounting for 1467.38MB, 90.78% of 1616.46MB total
Dropped 148 nodes (cum <= 8.08MB)
Showing top 10 nodes out of 89
flat flat% sum% cum cum%
378.62MB 23.42% 23.42% 378.62MB 23.42% slices.Grow[go.shape.[]uint8,go.shape.uint8]
232.62MB 14.39% 37.81% 232.62MB 14.39% github.com/google/syzkaller/prog.(*Target).calcDynamicPrio
224.96MB 13.92% 51.73% 224.96MB 13.92% github.com/google/syzkaller/prog.(*Target).calcStaticPriorities
208.27MB 12.88% 64.62% 208.27MB 12.88% github.com/google/syzkaller/prog.MakeDataArg
122.89MB 7.60% 72.22% 122.89MB 7.60% github.com/google/syzkaller/pkg/signal.(*Signal).Merge
119.74MB 7.41% 79.63% 577.33MB 35.72% github.com/google/syzkaller/prog.(*Target).BuildChoiceTable
58.29MB 3.61% 83.23% 58.29MB 3.61% github.com/google/syzkaller/prog.clone
57.54MB 3.56% 86.79% 57.54MB 3.56% github.com/google/syzkaller/pkg/cover.(*Cover).Serialize
35.32MB 2.18% 88.98% 46.82MB 2.90% github.com/google/syzkaller/pkg/symbolizer.read
29.12MB 1.80% 90.78% 29.12MB 1.80% slices.Clone[go.shape.[]uint8,go.shape.uint8]
Time: 2025-05-20 09:58:24 CST
Entering interactive mode (type "help" for commands, "o" for options)
(pprof) top
Showing nodes accounting for 1.05GB, 88.99% of 1.18GB total
Dropped 151 nodes (cum <= 0.01GB)
Showing top 10 nodes out of 99
flat flat% sum% cum cum%
0.23GB 19.69% 19.69% 0.23GB 19.69% github.com/google/syzkaller/pkg/signal.(*Signal).Merge
0.20GB 16.87% 36.56% 0.20GB 16.87% slices.Grow[go.shape.[]uint8,go.shape.uint8]
0.16GB 13.79% 50.35% 0.16GB 13.79% github.com/google/syzkaller/prog.MakeDataArg
0.12GB 10.14% 60.49% 0.12GB 10.14% github.com/google/syzkaller/prog.clone
0.11GB 9.74% 70.23% 0.11GB 9.74% github.com/google/syzkaller/prog.(*Target).BuildChoiceTable
0.10GB 8.52% 78.75% 0.10GB 8.52% github.com/google/syzkaller/pkg/cover.(*Cover).Serialize
0.05GB 3.91% 82.66% 0.05GB 3.91% bytes.growSlice
0.03GB 2.92% 85.59% 0.05GB 3.87% github.com/google/syzkaller/pkg/symbolizer.read
0.03GB 2.41% 87.99% 0.03GB 2.41% slices.Clone[go.shape.[]uint8,go.shape.uint8]
0.01GB 0.99% 88.99% 0.06GB 4.77% github.com/google/syzkaller/pkg/rpcserver.PrependExecuting
Time: 2025-05-20 10:04:09 CST
Entering interactive mode (type "help" for commands, "o" for options)
(pprof) top
Showing nodes accounting for 1384.19MB, 89.72% of 1542.79MB total
Dropped 196 nodes (cum <= 7.71MB)
Showing top 10 nodes out of 88
flat flat% sum% cum cum%
407.75MB 26.43% 26.43% 407.75MB 26.43% slices.Grow[go.shape.[]uint8,go.shape.uint8]
267.62MB 17.35% 43.78% 267.62MB 17.35% github.com/google/syzkaller/pkg/signal.(*Signal).Merge
151.22MB 9.80% 53.58% 151.22MB 9.80% github.com/google/syzkaller/prog.MakeDataArg
142.98MB 9.27% 62.85% 142.98MB 9.27% github.com/google/syzkaller/prog.clone
117.68MB 7.63% 70.47% 173.25MB 11.23% github.com/google/syzkaller/prog.(*Target).BuildChoiceTable
113.67MB 7.37% 77.84% 113.67MB 7.37% github.com/google/syzkaller/pkg/cover.(*Cover).Serialize
67.78MB 4.39% 82.23% 67.78MB 4.39% bytes.growSlice
51.06MB 3.31% 85.54% 55.57MB 3.60% github.com/google/syzkaller/prog.(*Target).calcStaticPriorities
35.32MB 2.29% 87.83% 46.82MB 3.03% github.com/google/syzkaller/pkg/symbolizer.read
29.12MB 1.89% 89.72% 29.12MB 1.89% slices.Clone[go.shape.[]uint8,go.shape.uint8]
$ go tool pprof -alloc_space http://localhost:6060/debug/pprof/heap
Fetching profile over HTTP from http://localhost:6060/debug/pprof/heap
Saved profile in /usr2/jiangenj/pprof/pprof.syz-manager.alloc_objects.alloc_space.inuse_objects.inuse_space.006.pb.gz
File: syz-manager
Type: alloc_space
Time: 2025-05-20 10:25:33 CST
Entering interactive mode (type "help" for commands, "o" for options)
(pprof) top
Showing nodes accounting for 68.09GB, 77.74% of 87.59GB total
Dropped 653 nodes (cum <= 0.44GB)
Showing top 10 nodes out of 131
flat flat% sum% cum cum%
24.19GB 27.61% 27.61% 29.46GB 33.64% compress/flate.NewWriter
6.63GB 7.57% 35.18% 6.63GB 7.57% slices.Grow[go.shape.[]uint8,go.shape.uint8]
6.60GB 7.53% 42.71% 6.60GB 7.53% slices.Clone[go.shape.[]uint8,go.shape.uint8]
6.33GB 7.23% 49.95% 6.33GB 7.23% bytes.growSlice
5.70GB 6.51% 56.45% 5.70GB 6.51% github.com/google/syzkaller/pkg/cover.(*Convert).convertPCs
5.13GB 5.86% 62.31% 5.13GB 5.86% compress/flate.(*compressor).initDeflate (inline)
4.41GB 5.04% 67.35% 4.96GB 5.67% github.com/google/syzkaller/prog.(*Target).calcStaticPriorities
4.21GB 4.80% 72.15% 4.21GB 4.80% github.com/google/syzkaller/prog.(*Target).calcDynamicPrio
2.65GB 3.03% 75.18% 2.66GB 3.03% io.ReadAll
2.24GB 2.56% 77.74% 11.41GB 13.03% github.com/google/syzkaller/prog.(*Target).BuildChoiceTable
(pprof) list compress/flate.NewWriter
Total: 87.59GB
ROUTINE ======================== compress/flate.NewWriter in /syzkaller/.cache/gomod/golang.org/toolchain@v0.0.1-go1.23.7.linux-amd64/src/compress/flate/deflate.go
24.19GB 29.46GB (flat, cum) 33.64% of Total
. . 666:func NewWriter(w io.Writer, level int) (*Writer, error) {
24.19GB 24.19GB 667: var dw Writer
. 5.28GB 668: if err := dw.d.init(w, level); err != nil {
. . 669: return nil, err
. . 670: }
. . 671: return &dw, nil
. . 672:}
. . 673:
ROUTINE ======================== compress/flate.NewWriterDict in /syzkaller/.cache/gomod/golang.org/toolchain@v0.0.1-go1.23.7.linux-amd64/src/compress/flate/deflate.go
0 4.25MB (flat, cum) 0.0047% of Total
. . 680:func NewWriterDict(w io.Writer, level int, dict []byte) (*Writer, error) {
. . 681: dw := &dictWriter{w}
. 4.25MB 682: zw, err := NewWriter(dw, level)
. . 683: if err != nil {
. . 684: return nil, err
. . 685: }
. . 686: zw.d.fillWindow(dict)
. . 687: zw.dict = append(zw.dict, dict...) // duplicate dictionary for Reset method.
Type: alloc_space
Time: 2025-05-20 10:32:10 CST
Entering interactive mode (type "help" for commands, "o" for options)
(pprof) top
Showing nodes accounting for 71.29GB, 77.42% of 92.09GB total
Dropped 692 nodes (cum <= 0.46GB)
Showing top 10 nodes out of 133
flat flat% sum% cum cum%
24.19GB 26.27% 26.27% 29.47GB 32.00% compress/flate.NewWriter
7.65GB 8.31% 34.57% 7.65GB 8.31% slices.Grow[go.shape.[]uint8,go.shape.uint8]
7.65GB 8.31% 42.88% 7.65GB 8.31% slices.Clone[go.shape.[]uint8,go.shape.uint8]
6.80GB 7.38% 50.27% 6.80GB 7.38% bytes.growSlice
6.36GB 6.91% 57.18% 6.36GB 6.91% github.com/google/syzkaller/pkg/cover.(*Convert).convertPCs
5.13GB 5.57% 62.75% 5.13GB 5.57% compress/flate.(*compressor).initDeflate (inline)
4.41GB 4.79% 67.54% 4.96GB 5.39% github.com/google/syzkaller/prog.(*Target).calcStaticPriorities
4.21GB 4.57% 72.11% 4.21GB 4.57% github.com/google/syzkaller/prog.(*Target).calcDynamicPrio
2.65GB 2.88% 74.99% 2.66GB 2.88% io.ReadAll
2.24GB 2.43% 77.42% 11.41GB 12.39% github.com/google/syzkaller/prog.(*Target).BuildChoiceTable