virtiofs/PassthroughFs can open too many file descriptors #505

@valpackett

Description

Currently, just running `find` over a massive tree (the nix store, in this case) in the guest can "brick" the virtiofs mount:

$ find /nix/store -name gdb
/nix/store/07agij5dq39pdgv6xla1naa6ilc7hwqd-python3.13-numba-0.62.0rc1/lib/python3.13/site-packages/numba/tests/gdb
[..]
/nix/store/rdq0ddh8l47548n08274r2vaw2mh0jjv-source/pkgs/development/tools/misc/gdb
find: ‘/nix/store/wn3wblni1cm7plb1dvakqcmr31szqdgn-source/pkgs/by-name/mf’: Too many open files
find: ‘/nix/store/wn3wblni1cm7plb1dvakqcmr31szqdgn-source/pkgs/by-name/mg’: Too many open files
find: ‘/nix/store/wn3wblni1cm7plb1dvakqcmr31szqdgn-source/pkgs/by-name/mh’: Too many open files
[..]
$ htop
htop: error while loading shared libraries: libncursesw.so.6: cannot close file descriptor: Error 24

The FDs in the host VMM process pile up until they hit the per-process limit:

❯ ls /proc/16008/fdinfo | wc -l
524282

These are all O_PATH descriptors:

❯ cat /proc/16008/fdinfo/508954
pos:    0
flags:  012100000
mnt_id: 1529
ino:    2918325
❯ rg flags /proc/16008/fdinfo/508930
2: flags:  012100000
❯ rg flags /proc/16008/fdinfo/508931
2: flags:  012100000
❯ python
>>> 0o12100000 & 0o10000000
2097152

2097152 is 0o10000000, which is O_PATH on Linux, so the flag is set on every one of these descriptors.

So we need to put some kind of limit on the size of the PassthroughFs#inodes map, automatically forgetting some unopened inodes when it reaches a threshold…
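To make the idea concrete, here is an illustrative sketch (not the actual PassthroughFs data structure; names and eviction policy are my own assumptions) of a capped inode map that evicts the least recently used entry with no open handles once it hits the threshold:

```python
from collections import OrderedDict

class CappedInodeMap:
    """Sketch of a bounded inode cache: at most `capacity` O_PATH fds
    are kept; inserting beyond that evicts the least recently used
    inode that has no open handles. In a real FUSE/virtiofs server,
    an evicted inode would simply be looked up again on next access."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        # inode id -> [fd, open_handle_count], in recency order
        self.entries: OrderedDict[int, list] = OrderedDict()

    def insert(self, inode: int, fd: int) -> None:
        if inode in self.entries:
            self.entries.move_to_end(inode)  # refresh recency
            return
        if len(self.entries) >= self.capacity:
            self._evict_one()
        self.entries[inode] = [fd, 0]

    def _evict_one(self) -> None:
        # Forget the least recently used inode with no open handles.
        for inode, (fd, handles) in self.entries.items():
            if handles == 0:
                del self.entries[inode]
                # os.close(fd) would go here in a real implementation
                return
        raise RuntimeError("all cached inodes are in use")

# A walk that touches many inodes now stays within the cap:
cache = CappedInodeMap(capacity=2)
cache.insert(1, 100)
cache.insert(2, 101)
cache.insert(3, 102)  # evicts inode 1, the oldest unopened entry
```

One wrinkle this glosses over: the FUSE protocol lets the guest hold inode references (balanced by FORGET messages), so a real implementation has to coordinate eviction with the guest's reference counts rather than evict purely by recency.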
