Description
I have observed that for workloads exercising a large number of files (e.g. nrfiles=10000) with a limited number of files open at any point in time (openfiles=16), fio fails with the error "Too many open files". With the following job file, I would expect fio to keep only 16 files open at any point in time. However, when I monitor fio with lsof, I see that after some time the open-file count increases drastically and the run fails.
# while true; do lsof | grep fio | wc -l; sleep 1; done
145
146
145
146
145
145
145
145
146
145
145
145
133
132
132
389
1110
With an increased ulimit for open files, the count reaches >5000 at times (and occasionally even >20000):
146
146
131
709
1407
2084
2731
3425
4100
4791
5459
146
146
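Note that `lsof | grep fio` counts every lsof row containing the string "fio" (which can include threads and unrelated matches), so as a sanity check the per-process descriptor table under /proc can be read directly. A minimal sketch, demonstrated on the current shell's PID since fio may not be running; for fio, the PID would be substituted (e.g. via `pgrep -x fio`, assuming pgrep is available):

```shell
# Count a process's open file descriptors directly from /proc (Linux).
# Demonstrated on the current shell ($$); for fio, substitute its PID,
# e.g. PID=$(pgrep -x fio | head -n1)
PID=$$
COUNT=$(ls "/proc/$PID/fd" | wc -l)
echo "PID $PID has $COUNT open fds"
```

Unlike grepping full lsof output, this counts only the target process's descriptors, so it should track the openfiles limit more directly.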
Job File:
[global]
ioengine=libaio
direct=0
size=14G
fsync=8
fsync_on_close=1
end_fsync=1
create_serialize=1
refill_buffers=1
verify=md5
numjobs=1
readwrite=rw
iodepth=16
bs_unaligned=1
time_based=1
create_on_open=1
filesize=8k-512k
do_verify=1
openfiles=16
bsrange=1k-8k
runtime=1800
nrfiles=10000
[job1]
directory=/root/fileio
I just wanted to check whether there is an issue with the job profile, or whether this could be a bug in fio's handling of file opens/closes.
FIO Version:
fio-3.3-31-gca65
on CentOS 7.4.