Description
Good day.
We are looking for some help/suggestions on high memory consumption by gluster volumes.
We have deployed our application, integrated with the open-source Gluster server, at a customer site, and it is facing high memory consumption by the gluster volume processes: the gluster server runs out of memory and requires a restart of the glusterd service. After the restart, the brick processes (glusterfsd) again consume more and more memory, eventually exhausting it all.
Details of our application and its use of the GlusterFS distributed file system in our environment:
We deploy our application on a VM (called a Director after installation and configuration) and later extend its capabilities to additional VMs for high availability. To share the same data across all the VMs, the data is stored on gluster volumes that are mounted and used on every VM.
Our application is multi-threaded and performs parallel read and write operations on the data; most of the time the volume of incoming data (events) is high, with a correspondingly high number of operations. For each Director, we configure 4 gluster volumes for different types of data.
Description of problem:
The customer has configured 12 Directors, for which they have created 56+ replicated volumes on a 3-node gluster cluster (glusterfs 11.1).
All 56+ volumes were created on a single physical disk (2.2 TB). Over time, most of the brick processes (glusterfsd) grow to around 1 GB of memory each, which eventually exhausts the node's total memory and forces a restart of the glusterd service.
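To narrow down where the memory is going, we could capture statedumps of one of the growing brick processes at two points in time and compare the allocation and mempool counts. A minimal sketch of what we would run (VOLNAME is a placeholder for one of the affected volumes; per server.statedump-path in the option dump below, the dumps land in /var/run/gluster):

$ sudo gluster volume statedump VOLNAME        # dumps all bricks of the volume
$ # ...wait while memory grows, then take a second dump...
$ sudo gluster volume statedump VOLNAME
$ ls /var/run/gluster/*.dump.*                 # compare [mempool] sections across dumps

We can collect and share these dumps if that would help.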
Expected results:
We need your help to resolve this issue.
Will tuning any gluster volume parameters help to address it, or is there another way to contain it? (One possible mitigation we are considering is sketched after these questions.)
Is it OK to have 56+ volumes created on a single physical disk, or does that have an impact?
The customer has been facing this issue for the last couple of months and is looking for an early resolution.
We would therefore appreciate any suggestions that could help avoid the issue.
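With 56+ volumes, each brick runs as its own glusterfsd process, so any fixed per-process overhead (caches, threads, memory pools) is multiplied 56+ times. One mitigation we are considering is brick multiplexing, which consolidates bricks into fewer processes; the option dump below shows it is currently disabled (cluster.brick-multiplex disable). We have not tested this for our workload, so the following is only a sketch of what we understand the procedure to be, and the max-bricks-per-process value is an arbitrary example. Would something along these lines be a reasonable approach?

$ # cluster-wide option; takes effect for bricks (re)started afterwards
$ sudo gluster volume set all cluster.brick-multiplex on
$ # optionally cap how many bricks share a single process (example value)
$ sudo gluster volume set all cluster.max-bricks-per-process 10
$ # restart each volume so its bricks attach to multiplexed processes
$ sudo gluster volume stop VOLNAME && sudo gluster volume start VOLNAME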
If any further information needs to be collected from the customer environment, please let us know.
Below are the environment details captured from the customer:
Memory - 66 GB
CPU - 6 cores
$ glusterfs --version
glusterfs 11.1
Repository revision: git://git.gluster.org/glusterfs.git
$ rpm -qa | grep -i glusterfs
glusterfs-client-xlators-11.1-1.el8s.x86_64
glusterfs-11.1-1.el8s.x86_64
glusterfs-selinux-2.0.1-2.el8.noarch
glusterfs-cli-11.1-1.el8s.x86_64
libglusterfs0-11.1-1.el8s.x86_64
glusterfs-server-11.1-1.el8s.x86_64
glusterfs-fuse-11.1-1.el8s.x86_64
$ sudo systemctl status glusterd.service
● glusterd.service - GlusterFS, a clustered file-system server
Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled)
Active: active (running) since Thu 2025-09-11 12:46:22 CEST; 5h 58min ago
Docs: man:glusterd(8)
Process: 4134163 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS)
Main PID: 4134164 (glusterd)
Tasks: 2583 (limit: 203762)
Memory: 60.2G
CGroup: /system.slice/glusterd.service
Memory usage of each gluster process:
PID: 3135 | CMD: glusterfsd | %MEM: 0.6 | RSS: 0.43 GB | VSZ: 1.63 GB
PID: 3146 | CMD: glusterfsd | %MEM: 1.1 | RSS: 0.70 GB | VSZ: 1.91 GB
PID: 3155 | CMD: glusterfsd | %MEM: 0.9 | RSS: 0.58 GB | VSZ: 1.78 GB
PID: 3168 | CMD: glusterfsd | %MEM: 1.0 | RSS: 0.65 GB | VSZ: 1.66 GB
PID: 3180 | CMD: glusterfsd | %MEM: 0.6 | RSS: 0.41 GB | VSZ: 1.63 GB
PID: 3190 | CMD: glusterfsd | %MEM: 1.1 | RSS: 0.70 GB | VSZ: 1.91 GB
PID: 3200 | CMD: glusterfsd | %MEM: 0.6 | RSS: 0.38 GB | VSZ: 1.74 GB
PID: 3214 | CMD: glusterfsd | %MEM: 1.0 | RSS: 0.69 GB | VSZ: 1.66 GB
PID: 3229 | CMD: glusterfsd | %MEM: 0.7 | RSS: 0.47 GB | VSZ: 1.63 GB
PID: 3239 | CMD: glusterfsd | %MEM: 1.2 | RSS: 0.79 GB | VSZ: 1.91 GB
PID: 3250 | CMD: glusterfsd | %MEM: 0.5 | RSS: 0.37 GB | VSZ: 1.76 GB
PID: 3262 | CMD: glusterfsd | %MEM: 0.8 | RSS: 0.57 GB | VSZ: 1.64 GB
PID: 3339 | CMD: glusterfsd | %MEM: 0.6 | RSS: 0.42 GB | VSZ: 1.78 GB
PID: 3350 | CMD: glusterfsd | %MEM: 0.9 | RSS: 0.62 GB | VSZ: 1.66 GB
PID: 3359 | CMD: glusterfsd | %MEM: 0.6 | RSS: 0.41 GB | VSZ: 1.63 GB
PID: 3874 | CMD: glusterfsd | %MEM: 0.8 | RSS: 0.55 GB | VSZ: 1.59 GB
PID: 3884 | CMD: glusterfsd | %MEM: 0.5 | RSS: 0.34 GB | VSZ: 1.57 GB
PID: 3894 | CMD: glusterfsd | %MEM: 0.5 | RSS: 0.32 GB | VSZ: 1.58 GB
PID: 3906 | CMD: glusterfsd | %MEM: 0.9 | RSS: 0.62 GB | VSZ: 1.59 GB
PID: 3927 | CMD: glusterfsd | %MEM: 0.7 | RSS: 0.45 GB | VSZ: 1.55 GB
PID: 3957 | CMD: glusterfsd | %MEM: 1.0 | RSS: 0.65 GB | VSZ: 1.91 GB
PID: 3967 | CMD: glusterfsd | %MEM: 0.5 | RSS: 0.35 GB | VSZ: 1.78 GB
PID: 3977 | CMD: glusterfsd | %MEM: 0.9 | RSS: 0.62 GB | VSZ: 1.66 GB
PID: 3987 | CMD: glusterfsd | %MEM: 0.6 | RSS: 0.39 GB | VSZ: 1.63 GB
PID: 4014 | CMD: glusterfsd | %MEM: 1.1 | RSS: 0.70 GB | VSZ: 1.91 GB
PID: 4022 | CMD: glusterfsd | %MEM: 0.4 | RSS: 0.29 GB | VSZ: 1.77 GB
PID: 4033 | CMD: glusterfsd | %MEM: 0.5 | RSS: 0.33 GB | VSZ: 1.63 GB
PID: 4048 | CMD: glusterfsd | %MEM: 0.5 | RSS: 0.33 GB | VSZ: 1.63 GB
PID: 4072 | CMD: glusterfsd | %MEM: 0.9 | RSS: 0.59 GB | VSZ: 1.91 GB
PID: 4082 | CMD: glusterfsd | %MEM: 0.4 | RSS: 0.28 GB | VSZ: 1.76 GB
PID: 4092 | CMD: glusterfsd | %MEM: 0.5 | RSS: 0.32 GB | VSZ: 1.63 GB
PID: 4104 | CMD: glusterfsd | %MEM: 0.5 | RSS: 0.32 GB | VSZ: 1.63 GB
PID: 4127 | CMD: glusterfsd | %MEM: 0.9 | RSS: 0.58 GB | VSZ: 1.91 GB
PID: 4141 | CMD: glusterfsd | %MEM: 0.4 | RSS: 0.28 GB | VSZ: 1.75 GB
PID: 4158 | CMD: glusterfsd | %MEM: 1.1 | RSS: 0.71 GB | VSZ: 1.66 GB
PID: 4171 | CMD: glusterfsd | %MEM: 0.6 | RSS: 0.39 GB | VSZ: 1.63 GB
PID: 4196 | CMD: glusterfsd | %MEM: 1.1 | RSS: 0.69 GB | VSZ: 1.91 GB
PID: 4208 | CMD: glusterfsd | %MEM: 0.4 | RSS: 0.30 GB | VSZ: 1.77 GB
PID: 4216 | CMD: glusterfsd | %MEM: 1.1 | RSS: 0.75 GB | VSZ: 1.65 GB
PID: 4226 | CMD: glusterfsd | %MEM: 0.6 | RSS: 0.41 GB | VSZ: 1.63 GB
PID: 4241 | CMD: glusterfsd | %MEM: 1.0 | RSS: 0.66 GB | VSZ: 1.91 GB
PID: 4253 | CMD: glusterfsd | %MEM: 0.5 | RSS: 0.37 GB | VSZ: 1.77 GB
PID: 4261 | CMD: glusterfsd | %MEM: 1.2 | RSS: 0.82 GB | VSZ: 1.66 GB
PID: 4271 | CMD: glusterfsd | %MEM: 0.6 | RSS: 0.44 GB | VSZ: 1.63 GB
PID: 4295 | CMD: glusterfsd | %MEM: 0.8 | RSS: 0.57 GB | VSZ: 1.91 GB
PID: 4306 | CMD: glusterfsd | %MEM: 0.6 | RSS: 0.41 GB | VSZ: 1.78 GB
PID: 4315 | CMD: glusterfsd | %MEM: 1.0 | RSS: 0.66 GB | VSZ: 1.66 GB
PID: 4374 | CMD: glusterfsd | %MEM: 1.2 | RSS: 0.80 GB | VSZ: 1.60 GB
PID: 794130 | CMD: glusterfsd | %MEM: 0.6 | RSS: 0.40 GB | VSZ: 1.36 GB
PID: 796861 | CMD: glusterfsd | %MEM: 1.1 | RSS: 0.73 GB | VSZ: 1.64 GB
PID: 796937 | CMD: glusterfsd | %MEM: 0.5 | RSS: 0.37 GB | VSZ: 1.52 GB
PID: 797140 | CMD: glusterfsd | %MEM: 1.0 | RSS: 0.64 GB | VSZ: 1.39 GB
PID: 797153 | CMD: glusterfsd | %MEM: 0.5 | RSS: 0.36 GB | VSZ: 1.36 GB
PID: 797178 | CMD: glusterfsd | %MEM: 1.1 | RSS: 0.70 GB | VSZ: 1.64 GB
PID: 1511563 | CMD: glusterfsd | %MEM: 0.5 | RSS: 0.33 GB | VSZ: 1.13 GB
PID: 2802486 | CMD: glusterfsd | %MEM: 0.0 | RSS: 0.02 GB | VSZ: 0.27 GB
PID: 3026960 | CMD: glusterd | %MEM: 0.1 | RSS: 0.09 GB | VSZ: 0.27 GB
PID: 3027266 | CMD: glusterfs | %MEM: 11.2 | RSS: 7.09 GB | VSZ: 17.51 GB
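A snapshot like the table above can be reproduced with a ps one-liner along these lines (a sketch; ps reports RSS/VSZ in KiB, which were converted to GB above). Note that the single largest consumer in the list is the glusterfs client/FUSE process (PID 3027266, 7.09 GB RSS), not any individual brick:

$ ps -eo pid,comm,pmem,rss,vsz --sort=-rss | grep gluster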
Parameters for one of the volumes:
Volume: xxxx_D
Option Value
cluster.lookup-unhashed on (DEFAULT)
cluster.lookup-optimize on (DEFAULT)
cluster.rmdir-optimize on (DEFAULT)
cluster.min-free-disk 10% (DEFAULT)
cluster.min-free-inodes 5% (DEFAULT)
cluster.rebalance-stats off (DEFAULT)
cluster.subvols-per-directory (null) (DEFAULT)
cluster.readdir-optimize off (DEFAULT)
cluster.rsync-hash-regex (null) (DEFAULT)
cluster.extra-hash-regex (null) (DEFAULT)
cluster.dht-xattr-name trusted.glusterfs.dht (DEFAULT)
cluster.randomize-hash-range-by-gfid off (DEFAULT)
cluster.rebal-throttle normal (DEFAULT)
cluster.lock-migration off
cluster.force-migration off
cluster.local-volume-name (null) (DEFAULT)
cluster.weighted-rebalance on (DEFAULT)
cluster.switch-pattern (null) (DEFAULT)
cluster.entry-change-log on (DEFAULT)
cluster.read-subvolume (null) (DEFAULT)
cluster.read-subvolume-index -1 (DEFAULT)
cluster.read-hash-mode 1 (DEFAULT)
cluster.background-self-heal-count 8 (DEFAULT)
cluster.metadata-self-heal off (DEFAULT)
cluster.data-self-heal off (DEFAULT)
cluster.entry-self-heal off (DEFAULT)
cluster.self-heal-daemon enable
cluster.heal-timeout 600 (DEFAULT)
cluster.self-heal-window-size 8 (DEFAULT)
cluster.data-change-log on (DEFAULT)
cluster.metadata-change-log on (DEFAULT)
cluster.data-self-heal-algorithm full
cluster.eager-lock off
disperse.eager-lock on (DEFAULT)
disperse.other-eager-lock on (DEFAULT)
disperse.eager-lock-timeout 1 (DEFAULT)
disperse.other-eager-lock-timeout 1 (DEFAULT)
cluster.quorum-type auto
cluster.quorum-count (null) (DEFAULT)
cluster.choose-local true (DEFAULT)
cluster.self-heal-readdir-size 1KB (DEFAULT)
cluster.post-op-delay-secs 1 (DEFAULT)
cluster.ensure-durability on (DEFAULT)
cluster.consistent-metadata yes
cluster.heal-wait-queue-length 128 (DEFAULT)
cluster.favorite-child-policy none (DEFAULT)
cluster.full-lock yes (DEFAULT)
cluster.optimistic-change-log on (DEFAULT)
diagnostics.latency-measurement off
diagnostics.dump-fd-stats off (DEFAULT)
diagnostics.count-fop-hits off
diagnostics.brick-log-level INFO
diagnostics.client-log-level INFO
diagnostics.brick-sys-log-level CRITICAL (DEFAULT)
diagnostics.client-sys-log-level CRITICAL (DEFAULT)
diagnostics.brick-logger (null) (DEFAULT)
diagnostics.client-logger (null) (DEFAULT)
diagnostics.brick-log-format (null) (DEFAULT)
diagnostics.client-log-format (null) (DEFAULT)
diagnostics.brick-log-buf-size 5 (DEFAULT)
diagnostics.client-log-buf-size 5 (DEFAULT)
diagnostics.brick-log-flush-timeout 120 (DEFAULT)
diagnostics.client-log-flush-timeout 120 (DEFAULT)
diagnostics.stats-dump-interval 0 (DEFAULT)
diagnostics.fop-sample-interval 0 (DEFAULT)
diagnostics.stats-dump-format json (DEFAULT)
diagnostics.fop-sample-buf-size 65535 (DEFAULT)
diagnostics.stats-dnscache-ttl-sec 86400 (DEFAULT)
performance.cache-max-file-size 0 (DEFAULT)
performance.cache-min-file-size 0 (DEFAULT)
performance.cache-refresh-timeout 1 (DEFAULT)
performance.cache-priority (DEFAULT)
performance.io-cache-size 32MB (DEFAULT)
performance.cache-size 32MB (DEFAULT)
performance.io-thread-count 16 (DEFAULT)
performance.high-prio-threads 16 (DEFAULT)
performance.normal-prio-threads 16 (DEFAULT)
performance.low-prio-threads 16 (DEFAULT)
performance.least-prio-threads 1 (DEFAULT)
performance.enable-least-priority on (DEFAULT)
performance.iot-watchdog-secs (null) (DEFAULT)
performance.iot-cleanup-disconnected-reqs off (DEFAULT)
performance.iot-pass-through false (DEFAULT)
performance.io-cache-pass-through false (DEFAULT)
performance.quick-read-cache-size 128MB (DEFAULT)
performance.cache-size 128MB (DEFAULT)
performance.quick-read-cache-timeout 1 (DEFAULT)
performance.qr-cache-timeout 1 (DEFAULT)
performance.quick-read-cache-invalidation false (DEFAULT)
performance.ctime-invalidation false (DEFAULT)
performance.flush-behind off
performance.nfs.flush-behind on (DEFAULT)
performance.write-behind-window-size 1MB (DEFAULT)
performance.resync-failed-syncs-after-fsync off (DEFAULT)
performance.nfs.write-behind-window-size 1MB (DEFAULT)
performance.strict-o-direct on
performance.nfs.strict-o-direct off (DEFAULT)
performance.strict-write-ordering off (DEFAULT)
performance.nfs.strict-write-ordering off (DEFAULT)
performance.write-behind-trickling-writes on (DEFAULT)
performance.aggregate-size 128KB (DEFAULT)
performance.nfs.write-behind-trickling-writes on (DEFAULT)
performance.lazy-open no
performance.read-after-open yes (DEFAULT)
performance.open-behind-pass-through false (DEFAULT)
performance.read-ahead-page-count 4 (DEFAULT)
performance.read-ahead-pass-through false (DEFAULT)
performance.readdir-ahead-pass-through false (DEFAULT)
performance.md-cache-pass-through false (DEFAULT)
performance.write-behind-pass-through false (DEFAULT)
performance.md-cache-timeout 1 (DEFAULT)
performance.cache-swift-metadata false (DEFAULT)
performance.cache-samba-metadata false (DEFAULT)
performance.cache-capability-xattrs true (DEFAULT)
performance.cache-ima-xattrs true (DEFAULT)
performance.md-cache-statfs off (DEFAULT)
performance.xattr-cache-list (DEFAULT)
performance.nl-cache-pass-through false (DEFAULT)
network.frame-timeout 180
network.ping-timeout 42 (DEFAULT)
network.tcp-window-size 1048576
client.ssl off
network.remote-dio disable (DEFAULT)
client.event-threads 24
client.tcp-user-timeout 0
client.keepalive-time 20
client.keepalive-interval 2
client.keepalive-count 9
client.strict-locks off
network.tcp-window-size 1048576
network.inode-lru-limit 16384 (DEFAULT)
auth.allow *
auth.reject (null) (DEFAULT)
transport.keepalive 1
server.allow-insecure on (DEFAULT)
server.root-squash off (DEFAULT)
server.all-squash off (DEFAULT)
server.anonuid 65534 (DEFAULT)
server.anongid 65534 (DEFAULT)
server.statedump-path /var/run/gluster (DEFAULT)
server.outstanding-rpc-limit 64 (DEFAULT)
server.ssl off
auth.ssl-allow *
server.manage-gids off (DEFAULT)
server.dynamic-auth on (DEFAULT)
client.send-gids on (DEFAULT)
server.gid-timeout 300 (DEFAULT)
server.own-thread (null) (DEFAULT)
server.event-threads 24
server.tcp-user-timeout 42
server.keepalive-time 20
server.keepalive-interval 2
server.keepalive-count 9
transport.listen-backlog 1024
ssl.own-cert (null) (DEFAULT)
ssl.private-key (null) (DEFAULT)
ssl.ca-list (null) (DEFAULT)
ssl.crl-path (null) (DEFAULT)
ssl.certificate-depth (null) (DEFAULT)
ssl.cipher-list (null) (DEFAULT)
ssl.dh-param (null) (DEFAULT)
ssl.ec-curve (null) (DEFAULT)
transport.address-family inet
performance.write-behind off
performance.read-ahead off
performance.readdir-ahead off
performance.io-cache off
performance.open-behind off
performance.quick-read off
performance.nl-cache off
performance.stat-prefetch off
performance.client-io-threads off
performance.nfs.write-behind on
performance.nfs.read-ahead off
performance.nfs.io-cache off
performance.nfs.quick-read off
performance.nfs.stat-prefetch off
performance.nfs.io-threads off
performance.force-readdirp true (DEFAULT)
performance.cache-invalidation true
performance.global-cache-invalidation true
features.uss off
features.snapshot-directory .snaps
features.show-snapshot-directory off
features.tag-namespaces off
network.compression off
network.compression.window-size -15 (DEFAULT)
network.compression.mem-level 8 (DEFAULT)
network.compression.min-size 1024 (DEFAULT)
network.compression.compression-level 1 (DEFAULT)
network.compression.debug false (DEFAULT)
features.default-soft-limit 80% (DEFAULT)
features.soft-timeout 60 (DEFAULT)
features.hard-timeout 5 (DEFAULT)
features.alert-time 86400 (DEFAULT)
features.quota-deem-statfs off
geo-replication.indexing off
geo-replication.indexing off
geo-replication.ignore-pid-check off
geo-replication.ignore-pid-check off
features.quota off
features.inode-quota off
features.bitrot disable
debug.trace off
debug.log-history no (DEFAULT)
debug.log-file no (DEFAULT)
debug.exclude-ops (null) (DEFAULT)
debug.include-ops (null) (DEFAULT)
debug.error-gen off
debug.error-failure (null) (DEFAULT)
debug.error-number (null) (DEFAULT)
debug.random-failure off (DEFAULT)
debug.error-fops (null) (DEFAULT)
features.read-only off (DEFAULT)
features.worm off
features.worm-file-level off
features.worm-files-deletable on
features.default-retention-period 120 (DEFAULT)
features.retention-mode relax (DEFAULT)
features.auto-commit-period 180 (DEFAULT)
storage.linux-aio off (DEFAULT)
storage.linux-io_uring off (DEFAULT)
storage.batch-fsync-mode reverse-fsync (DEFAULT)
storage.batch-fsync-delay-usec 0 (DEFAULT)
storage.owner-uid -1 (DEFAULT)
storage.owner-gid -1 (DEFAULT)
storage.node-uuid-pathinfo off (DEFAULT)
storage.health-check-interval 30 (DEFAULT)
storage.build-pgfid off (DEFAULT)
storage.gfid2path on (DEFAULT)
storage.gfid2path-separator : (DEFAULT)
storage.reserve 1 (DEFAULT)
storage.health-check-timeout 20 (DEFAULT)
storage.fips-mode-rchecksum on
storage.force-create-mode 0000 (DEFAULT)
storage.force-directory-mode 0000 (DEFAULT)
storage.create-mask 0777 (DEFAULT)
storage.create-directory-mask 0777 (DEFAULT)
storage.max-hardlinks 100 (DEFAULT)
features.ctime on (DEFAULT)
config.gfproxyd off
cluster.server-quorum-type server
cluster.server-quorum-ratio 51
changelog.changelog off (DEFAULT)
changelog.changelog-dir {{ brick.path }}/.glusterfs/changelogs (DEFAULT)
changelog.encoding ascii (DEFAULT)
changelog.rollover-time 15 (DEFAULT)
changelog.fsync-interval 5 (DEFAULT)
changelog.changelog-barrier-timeout 120
changelog.capture-del-path off (DEFAULT)
features.barrier disable
features.barrier-timeout 120
features.trash off (DEFAULT)
features.trash-dir .trashcan (DEFAULT)
features.trash-eliminate-path (null) (DEFAULT)
features.trash-max-filesize 5MB (DEFAULT)
features.trash-internal-op off (DEFAULT)
cluster.enable-shared-storage disable
locks.trace off (DEFAULT)
locks.mandatory-locking off (DEFAULT)
cluster.disperse-self-heal-daemon enable (DEFAULT)
cluster.quorum-reads no (DEFAULT)
client.bind-insecure (null) (DEFAULT)
features.shard off
features.shard-block-size 64MB (DEFAULT)
features.shard-lru-limit 16384 (DEFAULT)
features.shard-deletion-rate 100 (DEFAULT)
features.scrub-throttle lazy
features.scrub-freq biweekly
features.scrub false (DEFAULT)
features.expiry-time 120
features.signer-threads 4
features.cache-invalidation on
features.cache-invalidation-timeout 60 (DEFAULT)
ganesha.enable off
features.leases off
features.lease-lock-recall-timeout 60 (DEFAULT)
disperse.background-heals 8 (DEFAULT)
disperse.heal-wait-qlength 128 (DEFAULT)
cluster.heal-timeout 600 (DEFAULT)
dht.force-readdirp on (DEFAULT)
disperse.read-policy gfid-hash (DEFAULT)
cluster.shd-max-threads 1 (DEFAULT)
cluster.shd-wait-qlength 1024 (DEFAULT)
cluster.locking-scheme full (DEFAULT)
cluster.granular-entry-heal on
features.locks-revocation-secs 0 (DEFAULT)
features.locks-revocation-clear-all false (DEFAULT)
features.locks-revocation-max-blocked 0 (DEFAULT)
features.locks-monkey-unlocking false (DEFAULT)
features.locks-notify-contention yes (DEFAULT)
features.locks-notify-contention-delay 5 (DEFAULT)
disperse.shd-max-threads 1 (DEFAULT)
disperse.shd-wait-qlength 1024 (DEFAULT)
disperse.cpu-extensions auto (DEFAULT)
disperse.self-heal-window-size 32 (DEFAULT)
cluster.use-compound-fops off
performance.parallel-readdir on
performance.rda-request-size 131072
performance.rda-low-wmark 4096 (DEFAULT)
performance.rda-high-wmark 128KB (DEFAULT)
performance.rda-cache-limit 10MB
performance.nl-cache-positive-entry false (DEFAULT)
performance.nl-cache-limit 10MB
performance.nl-cache-timeout 60 (DEFAULT)
cluster.brick-multiplex disable
cluster.brick-graceful-cleanup disable
glusterd.vol_count_per_thread 100
cluster.max-bricks-per-process 250
disperse.optimistic-change-log on (DEFAULT)
disperse.stripe-cache 4 (DEFAULT)
cluster.halo-enabled False (DEFAULT)
cluster.halo-shd-max-latency 99999 (DEFAULT)
cluster.halo-nfsd-max-latency 5 (DEFAULT)
cluster.halo-max-latency 5 (DEFAULT)
cluster.halo-max-replicas 99999 (DEFAULT)
cluster.halo-min-replicas 2 (DEFAULT)
features.selinux on
cluster.daemon-log-level INFO
debug.delay-gen off
delay-gen.delay-percentage 10% (DEFAULT)
delay-gen.delay-duration 100000 (DEFAULT)
delay-gen.enable (DEFAULT)
disperse.parallel-writes on (DEFAULT)
disperse.quorum-count 0 (DEFAULT)
features.sdfs off
features.cloudsync off
features.ctime on
ctime.noatime on
features.cloudsync-storetype (null) (DEFAULT)
features.enforce-mandatory-lock off
config.global-threading off
config.client-threads 16
config.brick-threads 16
features.cloudsync-remote-read off
features.cloudsync-store-id (null) (DEFAULT)
features.cloudsync-product-id (null) (DEFAULT)
features.acl enable
feature.simple-quota-pass-through true
feature.simple-quota.use-backend false
cluster.use-anonymous-inode yes
rebalance.ensure-durability on (DEFAULT)
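For reference, the dump above appears to come from gluster volume get; if it is easier to review only the options changed from their defaults, they can be isolated with something like this sketch:

$ sudo gluster volume get VOLNAME all | grep -v 'DEFAULT'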
- The operating system / glusterfs version:
RHEL 8.10, glusterfs 11.1