@theTibi theTibi commented Jan 9, 2026

  • Introduced maxReplicas parameter in values.yaml to set the maximum number of PMM replicas for HAProxy server-template.
  • Updated haproxy-configmap.yaml to utilize server-template for dynamic DNS-based discovery, allowing automatic pod discovery during scaling without requiring HAProxy restarts.
  • Enhanced comments for clarity on the configuration and behavior of HAProxy with respect to PMM pods.

@theTibi theTibi requested a review from a team as a code owner January 9, 2026 14:23
@theTibi theTibi requested review from JiriCtvrtka and idoqo and removed request for a team January 9, 2026 14:23
- Documented the requirement for the Raft leader to be on pmm-0 when scaling down to a single PMM replica.
- Included instructions on the scaling process to prevent PMM from becoming unreachable during scaling operations.
```
# Use server-template for dynamic DNS-based discovery of PMM pods
# This automatically discovers pods when scaling up/down without requiring HAProxy restart
# HAProxy will re-resolve DNS based on 'hold valid' setting in the resolver (10s)
server-template pmm 1-{{ $.Values.maxReplicas | default 10 }} {{ $.Values.service.name | default "monitoring-service" }}.{{ $.Release.Namespace }}.svc.cluster.local:8443 check ssl verify none resolvers k8s init-addr last,libc,none
```
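The `server-template` line refers to a resolvers section named `k8s` with a `hold valid` of 10s. For context, a minimal sketch of what such a section might look like; the nameserver address and timeout values below are illustrative assumptions, not taken from this PR:

```
# Sketch of a 'k8s' resolvers section (values are assumptions for illustration)
resolvers k8s
    # Address of the cluster DNS service (kube-dns/CoreDNS); cluster-specific
    nameserver dns 10.96.0.10:53
    resolve_retries 3
    timeout resolve 1s
    timeout retry   1s
    # Cache DNS answers for 10s, matching the comment above
    hold valid 10s
```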
Member

Question: Don't we have to use {{ $.Release.Name }} instead of pmm in here?

Author

These are not pod names; this is just a prefix for HAProxy server slots. HAProxy keeps 10 slots, queries DNS for the headless service (monitoring-service.namespace.svc.cluster.local), and assigns the returned pod IPs to those slots. The actual pod name does not matter to HAProxy. If we want, we can use something like this:
```
server-template {{ $.Release.Name }}- 1-{{ $.Values.maxReplicas | default 10 }} {{ $.Values.service.name | default "monitoring-service" }}.{{ $.Release.Namespace }}.svc.cluster.local:8443 check ssl verify none resolvers k8s init-addr last,libc,none
```

That might make the HAProxy stats somewhat clearer, but again these names will not correspond to the real pods; they are just reserved slots, and HAProxy assigns pods to the slots in no particular order.
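For illustration, with a hypothetical release name of `pmm-ha`, the default `maxReplicas` of 10, and a hypothetical namespace `monitoring`, the alternative template above would render to something like:

```
# Rendered example (release name and namespace are hypothetical)
server-template pmm-ha- 1-10 monitoring-service.monitoring.svc.cluster.local:8443 check ssl verify none resolvers k8s init-addr last,libc,none
```

HAProxy then exposes slots named pmm-ha-1 through pmm-ha-10 in its stats, filling them from the DNS answers for the headless service.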

@theTibi theTibi merged commit dcab453 into percona:pmmha-v3 Jan 14, 2026
3 checks passed
