Description
Hi,
I have seaweedfs-csi-driver configured against SeaweedFS on Kubernetes with the default SeaweedFS values apart from replicas and volumes: 3 masters with 001 replication, 3 filers, and 4 volume replicas. We've also increased the CSI driver controller to 3 replicas to avoid a SPOF. We have 8 application pods running a total of ~100 ffmpeg processes streaming live HLS content. For each process, a new .ts file is written with the stream data every 2 seconds, and a master.m3u8 file is updated every 2 seconds. For every new .ts data file that is written, an old one is deleted (a constant stream of changing data).
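For reference, the per-stream write pattern can be simulated with a short shell loop. This is a minimal sketch, not our actual tooling; the mount path, segment size, and the 1-second interval here are illustrative (the real workload writes ~800 KB segments every 2 seconds under /mnt/hls):

```shell
# Hypothetical simulation of one stream's write/delete cycle.
MOUNT="${MOUNT:-/tmp/hls-sim/stream}"
mkdir -p "$MOUNT"
i=0
while [ "$i" -lt 6 ]; do
    # Write a new segment, as ffmpeg would.
    head -c 800000 /dev/urandom > "$MOUNT/stream0_$i.ts"
    # Rewrite the playlist in place.
    printf '#EXTM3U\n#EXT-X-VERSION:3\nstream0_%s.ts\n' "$i" > "$MOUNT/master.m3u8"
    # Delete the oldest segment to keep a constant sliding window.
    [ "$i" -ge 3 ] && rm -f "$MOUNT/stream0_$((i - 3)).ts"
    i=$((i + 1))
    sleep 1
done
```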
When accessing the master.m3u8 file we're finding that it is randomly empty (0 bytes); some of the .ts files are also 0 bytes while others are fine. For example:
```
root@xxxxx-ingest-dc778486b-hbxdc:/var/www/html# ls -l /mnt/hls/etyetihk/1720479791/
total 3268
-rw-r--r-- 1 sail sail      0 Jul 13 23:21 stream0_101573.ts
-rw-r--r-- 1 sail sail      0 Jul 13 23:21 stream0_101574.ts
-rw-r--r-- 1 sail sail      0 Jul 13 23:21 stream0_101575.ts
-rw-r--r-- 1 sail sail 835284 Jul 13 23:21 stream0_101576.ts
-rw-r--r-- 1 sail sail 831524 Jul 13 23:21 stream0_101577.ts
-rw-r--r-- 1 sail sail 833216 Jul 13 23:22 stream0_101578.ts
-rw-r--r-- 1 sail sail 844684 Jul 13 23:22 stream0_101579.ts
```
Sometimes every file is zero bytes, sometimes none are, and sometimes only some are; it isn't consistent. I'm also finding that some larger one-off writes of entire mp4 files at ~1-5 GB end up empty.
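To sample how often this happens, a sweep for zero-byte HLS artifacts can be run against the mount. This is a hypothetical helper (`audit_zero_bytes` is not part of SeaweedFS or the CSI driver):

```shell
# Hypothetical helper: print every zero-byte .ts or .m3u8 file under a
# directory, so the failure rate can be sampled over time.
audit_zero_bytes() {
    find "$1" -type f \( -name '*.ts' -o -name '*.m3u8' \) -size 0 -print
}

# Example against the mount path from the listing above:
# audit_zero_bytes /mnt/hls
```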
Log files on all pods look normal; no visible errors that I could see.
We're migrating over from nfs-ganesha-server-and-external-provisioner because it is a SPOF; the previous solution worked without issue. The only change is using SeaweedFS instead.
We tried doubling the filer replicas, and even decreasing down to 1, to no avail.
I'm wondering if it could have something to do with the concurrentWriters default of 32?
Any thoughts as to where to look to solve this?