Description
Checklist:
- I've searched in the docs and FAQ for my answer: https://bit.ly/argocd-faq.
- I've included steps to reproduce the bug.
- I've pasted the output of `argocd version`.
Describe the bug
When using:

```yaml
lifecycle:
  preStop:
    sleep:
      seconds: 120
```

with ArgoCD and a helm chart as a source, ArgoCD will continually override the deployment, forcing it to scale to 1 replica regardless of the HPA configuration.
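For context, the stock `helm create` scaffold only renders `replicas` when autoscaling is disabled, so with `autoscaling.enabled: true` the Deployment should not pin a replica count at all and the HPA is expected to own scaling (sketch of the relevant fragment of the default `deployment.yaml` template):

```yaml
# templates/deployment.yaml (stock helm create scaffold, fragment):
# replicas is omitted entirely when autoscaling.enabled is true,
# so nothing in the rendered manifest should force 1 replica.
spec:
  {{- if not .Values.autoscaling.enabled }}
  replicas: {{ .Values.replicaCount }}
  {{- end }}
```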
To Reproduce

1. Create a basic helm chart with `helm create`, set `autoscaling.enabled: true` and `autoscaling.minReplicas: 2`, then apply the application.
2. Update the chart to use:

   ```yaml
   lifecycle:
     preStop:
       sleep:
         seconds: 120
   ```

3. Wait for the application to sync.
4. Note the deployment has been scaled down to 1 pod (despite the HPA `minReplicas`); it will repeatedly try to scale up, but the new pods are deleted.
```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: sleep-test
  namespace: argocd
spec:
  project: default
  destination:
    server: https://kubernetes.default.svc
    namespace: default
  source:
    chart: sleep-test
    helm:
      values: |
        autoscaling:
          enabled: true
          minReplicas: 2
      version: v3
    repoURL: repo-for-the-chart
    targetRevision: 0.1.0
  syncPolicy:
    automated:
      selfHeal: true
    syncOptions:
      - Replace=true
      - Force=true
      - Prune=true
```
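Possibly related: with `Replace=true`, syncs replace the whole object rather than patching it, which would reset `spec.replicas` on every sync instead of leaving it to the HPA. A mitigation (not a fix for the underlying bug) might be Argo CD's documented `ignoreDifferences` pattern for HPA-managed replica counts, combined with `RespectIgnoreDifferences=true` so self-heal does not revert the live value (sketch, assuming the same Application as above):

```yaml
spec:
  # Ignore the replica count the HPA manages...
  ignoreDifferences:
    - group: apps
      kind: Deployment
      jsonPointers:
        - /spec/replicas
  syncPolicy:
    syncOptions:
      # ...and make syncs honor the ignored field instead of overwriting it.
      - RespectIgnoreDifferences=true
```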
Expected behavior
This lifecycle setting should not interfere with scaling.
Version
```
argocd-server: v2.14.7+d107d4e
  BuildDate: 2025-03-19T19:56:45Z
  GitCommit: d107d4e41a9b8fa792ed6955beca43d2642bad26
  GitTreeState: clean
  GoVersion: go1.24.1
  Compiler: gc
  Platform: darwin/arm64
  Kustomize Version: v5.6.0 2025-01-14T15:08:34Z
  Helm Version: v3.17.1+g980d8ac
  Kubectl Version: v0.31.0
  Jsonnet Version: v0.20.0
```