Description
I am running the most recent version of the operator (v1.3.0 as of today).
When spec.snapshot.enableOnMasterOnly is set
and all pods are rolling due to some other config change, I do see this line in the log, stating that the snapshot_cron setting will be disabled on the replicas:
clearing snapshot cron schedule on replica
However, I never see this:
setting snapshot cron schedule on master
which I should see when the new master is promoted (if I'm not mistaken?), as per link 1 and link 2.
I also did not see either of these messages in my logs:
response of `SLAVE OF NO ONE` on master is not OK
error running SLAVE OF NO ONE command
So it doesn't seem that there was an obvious redis client error.
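If I read this correctly (the commands below are my own reconstruction from the log strings above, not taken from the operator source), a failover should roughly boil down to the newly promoted pod receiving:
SLAVEOF NO ONE
CONFIG SET snapshot_cron "*/5 * * * *"
while the remaining replicas receive:
CONFIG SET snapshot_cron ""
The promotion itself clearly succeeds (there are no SLAVE OF NO ONE errors in the log), but the second step on the master apparently never happens.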
When I ask the pod currently labeled as master for its snapshot_cron config, it also reports an empty value:
$ redis-cli
127.0.0.1:6379> config get snapshot*
1) "snapshot_cron"
2) ""
Demo object I use:
apiVersion: dragonflydb.io/v1alpha1
kind: Dragonfly
metadata:
  name: dragonfly
  namespace: xyz
spec:
  annotations:
  args:
    - "--cache_mode"
    - "--cluster_mode=emulated"
    - "--version_check=false"
    - "--proactor_threads=1"
    - "--maxmemory=256Mi"
    - "--dbfilename=dragonfly"
    - "--max_eviction_per_heartbeat=100"
    - "--dbnum=16"
    - "--eviction_memory_budget_threshold=0.1"
    - "--max_segment_to_consider=4"
    - "--shard_repl_backlog_len=1"
  image: ghcr.io/dragonflydb/dragonfly:v1.34.1
  imagePullPolicy: Always
  replicas: 3
  resources:
    limits:
      cpu: 1
      memory: 320Mi
    requests:
      cpu: 100m
      ephemeral-storage: 1Gi
      memory: 320Mi
  serviceAccountName: dragonfly
  snapshot:
    cron: "*/5 * * * *"
    dir: s3://bucket/xyz
    enableOnMasterOnly: true
  topologySpreadConstraints:
    - labelSelector:
        matchLabels:
          app: dragonfly
      maxSkew: 1
      topologyKey: kubernetes.io/hostname
      whenUnsatisfiable: DoNotSchedule
    - labelSelector:
        matchLabels:
          app: dragonfly
      maxSkew: 1
      topologyKey: topology.kubernetes.io/zone
      whenUnsatisfiable: ScheduleAnyway
As a result, snapshotting doesn't seem to work at all for me anymore.
Is there anything I'm obviously doing wrong here, or is this maybe a bug in the implementation?