
fix: Ensure pod template annotations are rendered for provider deployments#20

Merged

nimrod-teich merged 2 commits into lavanet:main from mostwanted7:main
Jan 15, 2026

Conversation

Contributor

@mostwanted7 commented Dec 16, 2025

Summary

Fixes an issue in ArgoCD-managed deployments where kubectl rollout restart would create a new pod that was immediately terminated instead of performing a proper rolling restart.

Root cause

The chart did not consistently render spec.template.metadata.annotations. When no pod annotations were configured, the pod template effectively had no annotations map. During a rollout restart, kubectl temporarily injects kubectl.kubernetes.io/restartedAt, but in ArgoCD-managed deployments this change was later overwritten when the Deployment was reconciled from the rendered manifest. This caused the pod template hash to revert and the newly created ReplicaSet to be scaled down immediately.
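For context, a rollout restart works by patching a timestamp annotation into the pod template, which changes the template hash and triggers a new ReplicaSet. The patch looks roughly like this (the timestamp value is illustrative):

```yaml
# Patch applied by `kubectl rollout restart` to the Deployment's pod template
# (timestamp shown is illustrative):
spec:
  template:
    metadata:
      annotations:
        kubectl.kubernetes.io/restartedAt: "2025-12-16T10:00:00Z"
```

When ArgoCD later reconciles the Deployment against a rendered manifest that has no annotations map at all, this injected annotation disappears, the template hash reverts, and the new ReplicaSet is scaled back down.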

Fix

Ensure the pod template annotations block is rendered when pod annotations are provided. This stabilizes the pod template metadata and prevents the restart annotation from being removed during reconciliation, allowing Kubernetes to complete a normal rolling update.
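As a minimal sketch of the kind of template guard involved (the values key `.Values.podAnnotations` and the exact indentation are assumptions, not necessarily the chart's actual code):

```yaml
# templates/deployment.yaml (sketch; .Values.podAnnotations is an assumed key)
spec:
  template:
    metadata:
      {{- with .Values.podAnnotations }}
      annotations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
```

With the `with` guard, the annotations block is rendered whenever pod annotations are set, so the rendered manifest's pod template metadata stays stable across reconciliations.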

Result

kubectl rollout restart now behaves as expected in ArgoCD-managed deployments, replacing the old pod with a new one via a proper rolling deployment.

@mostwanted7 mostwanted7 changed the title Ensure pod template annotations are rendered for provider deployments fix: Ensure pod template annotations are rendered for provider deployments Dec 16, 2025
@nimrod-teich nimrod-teich merged commit f47d5f8 into lavanet:main Jan 15, 2026
4 of 5 checks passed
