[CI] Attach pod disruption budgets to runner pods#523
Merged
boomanaiden154 merged 4 commits into llvm:main on Jul 24, 2025
Conversation
This patch adds pod disruption budgets to the runner pods that set the minimum number of available pods equal to the maximum. This ensures that the number of pods k8s calculates as disruptable is zero. As a result, when GKE updates the node pool, it must wait an hour before forcibly evicting a pod, giving it time to finish. Before this, when GKE wanted to upgrade a node, it would forcibly evict the pod very quickly (theoretically after the grace period, which defaults to 30s), not realizing it is stateful.
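For reference, a minimal sketch of what such a PodDisruptionBudget could look like (the resource name, label selector, and replica count here are illustrative assumptions, not the exact values from the patch):

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: runner-pdb            # hypothetical name
spec:
  # Set minAvailable equal to the runner pool's maximum replica count,
  # so the allowed disruptions k8s computes is always zero.
  minAvailable: 4
  selector:
    matchLabels:
      app: runner             # hypothetical label; must match the runner pods
```

With zero allowed disruptions, GKE's node-upgrade eviction is blocked until the PDB's respect window (one hour on GKE) expires, rather than evicting after the default 30s grace period.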
cmtice
reviewed
Jul 24, 2025
@@ -0,0 +1,10 @@
apiVersion: policy/v1
Contributor
I would prefer a more descriptive file name than "pdb.yaml".
Contributor
Author
Updated to pod-disruption-budget.yaml.
cmtice
reviewed
Jul 24, 2025
cmtice
reviewed
Jul 24, 2025
cmtice
reviewed
Jul 24, 2025
tatus
Outdated
@@ -0,0 +1,59 @@
diff --git a/premerge/pdb.yaml b/premerge/pdb.yaml
Contributor
Is there a typo in this file name ("tatus")?
Contributor
Author
The file shouldn't exist. It looks like I was typing in the wrong spot while editing the commit message in vim, and some version of the commit ended up in a file. Removed.
cmtice
approved these changes
Jul 24, 2025
vvereschaka pushed a commit to vvereschaka/llvm-zorg that referenced this pull request on Sep 25, 2025