
test: Improved Cluster State Test Suite duration from 727 seconds to 71 seconds #2060

Merged

Conversation

rschalo
Contributor

@rschalo rschalo commented Mar 7, 2025

Fixes #N/A

Description
Recently, the cluster state tests have been taking an increasing amount of time (see below for a run that took 727 seconds). I generated a test report with Ginkgo (commands shown below) to identify the slowest tests in the suite and found that the Cluster State Sync tests were running the longest.

Investigating the ExpectApplied function showed that applying objects sequentially takes a long time because each call makes five requests to the apiserver:

  • fetch the object
  • create the object if not found
  • update the object resource version after create
  • update the object status resource version
  • fetch the object now that it has been updated

Running these requests in parallel with a WaitGroup dramatically speeds up the tests. The parallel requests initially ran into client-side throttling, which is not constrained by API Priority & Fairness. To get around this, I also added a helper function that allows custom rest.Config settings when constructing the test environment and configured the client to not be rate-limited (see the sketch below).
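
For illustration, here is a minimal sketch of those two pieces, assuming an ExpectApplied-style apply helper and a test environment constructor that accepts rest.Config mutators; the function names below are hypothetical, not the exact helpers added in this PR.

// Hypothetical sketch: parallel applies plus an unthrottled client.
// applyFn stands in for the existing ExpectApplied-style helper.
package state_test

import (
	"sync"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/util/flowcontrol"
)

// withUnthrottledClient removes client-side rate limiting so parallel requests
// are only subject to server-side API Priority & Fairness.
func withUnthrottledClient(cfg *rest.Config) {
	cfg.RateLimiter = flowcontrol.NewFakeAlwaysRateLimiter()
}

// applyInParallel issues one apply per node concurrently and waits for all of
// them to finish before returning.
func applyInParallel(nodes []*corev1.Node, applyFn func(*corev1.Node)) {
	var wg sync.WaitGroup
	for _, node := range nodes {
		wg.Add(1)
		go func(n *corev1.Node) {
			defer wg.Done()
			applyFn(n) // each call still performs its get/create/update round-trips
		}(node)
	}
	wg.Wait()
}

In recent client-go versions, setting cfg.QPS to a negative value should also disable client-side rate limiting when no custom RateLimiter is set.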

One thing that may need to change: the previous test checked ExpectMetricGaugeValue(state.ClusterStateNodesCount, float64(i+1), nil) after every create, whereas now this metric is checked against the expected count only after all objects have been created. A better way to track this could be another goroutine that polls the metric and validates that it increases until the test reaches the expected end result (see the sketch below). Happy to add that in this PR or in a subsequent one.
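
A minimal sketch of that polling idea, assuming Ginkgo/Gomega and the prometheus testutil package; the helper name and gauge wiring are hypothetical and would need to be adapted to the metric type used in this suite.

// Hypothetical sketch: poll the nodes-count gauge in the background while the
// parallel applies run, and assert it reaches the expected count.
package state_test

import (
	. "github.com/onsi/ginkgo/v2"
	. "github.com/onsi/gomega"
	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/testutil"
)

// watchNodesCountGauge returns a channel that is closed once the gauge has
// reached the expected value (or the Eventually assertion has failed).
func watchNodesCountGauge(gauge prometheus.Gauge, expected float64) <-chan struct{} {
	done := make(chan struct{})
	go func() {
		defer GinkgoRecover() // surface assertion failures from this goroutine
		defer close(done)
		Eventually(func() float64 {
			return testutil.ToFloat64(gauge)
		}).Should(BeNumerically(">=", expected))
	}()
	return done
}

The test would start this watcher before kicking off the parallel applies and then block on the returned channel once they complete; in practice the Eventually would likely need a longer timeout than Gomega's one-second default.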

How was this change tested?
To evaluate effectiveness, compare recent CI runs for 1.31.x and 1.32.x clusters against the CI runs in this PR. Additionally, see below for a local comparison.

Before

❯ ginkgo --json-report=report.json ./pkg/controllers/state/...
Running Suite: Controllers/State - /Users/schalor/github.com/testing/karpenter/pkg/controllers/state
====================================================================================================
Random Seed: 1741357499

Will run 58 of 58 specs
••••••••••••••••••••••••••••••••••••••••••••••••••••••••••

Ran 58 of 58 Specs in 727.134 seconds
SUCCESS! -- 58 Passed | 0 Failed | 0 Pending | 0 Skipped
PASS

Ginkgo ran 1 suite in 12m18.298690333s
Test Suite Passed
❯ cat report.json | jq -r '
    [
        .[].SpecReports.[]?
        | select( .State | match("passed") )
        | select( .LeafNodeType | match("It"))
        | {
            Context: ( .ContainerHierarchyTexts | join("/")),
            Test: .LeafNodeText,
            Duration: (.RunTime * .000000001)
        }
    ]
    | sort_by( .Duration )
    | reverse
' | head -n 1000
[
  {
    "Context": "Cluster State Sync",
    "Test": "should consider the cluster state synced when nodes register provider id",
    "Duration": 201.822036709
  },
  {
    "Context": "Cluster State Sync",
    "Test": "shouldn't consider the cluster state synced if a node isn't tracked",
    "Duration": 101.477449167
  },
  {
    "Context": "Cluster State Sync",
    "Test": "should consider the cluster state synced when nodes don't have provider id",
    "Duration": 101.474290916
  },
  {
    "Context": "Cluster State Sync",
    "Test": "should consider the cluster state synced when all nodes are tracked",
    "Duration": 101.436484167
  },
  {
    "Context": "Cluster State Sync",
    "Test": "should consider the cluster state synced when a combination of nodeclaims and node are tracked",
    "Duration": 53.380126292
  },
  {
    "Context": "Cluster State Sync",
    "Test": "should consider the cluster state synced when the representation of nodes is the same",
    "Duration": 51.462873583000004
  },
  {
    "Context": "Cluster State Sync",
    "Test": "should consider the cluster state synced when a new node is added after the initial sync",
    "Duration": 51.249976291
  },
  {
    "Context": "Node Resource Level",
    "Test": "should nominate the node until the nomination time passes",
    "Duration": 21.214906542
  },
  {
    "Context": "Data Races",
    "Test": "should ensure that calling Synced() is valid while making updates to Nodes",
    "Duration": 10.200998417000001
  },
  {
    "Context": "Cluster State Sync",
    "Test": "shouldn't consider the cluster state synced if a nodeclaim isn't tracked",
    "Duration": 5.991445583
  },
  {
    "Context": "Cluster State Sync",
    "Test": "should consider the cluster state synced when all nodeclaims are tracked",
    "Duration": 4.708709875
  },
  {
    "Context": "Cluster State Sync",
    "Test": "shouldn't consider the cluster state synced if a nodeclaim hasn't resolved its provider id",
    "Duration": 4.649389958
  },
  {
    "Context": "DaemonSet Controller",
    "Test": "should only return daemonset pods from the daemonset cache",
    "Duration": 2.433895583
  },
  {
    "Context": "Pod Ack",
    "Test": "should only mark pods as schedulable once",
    "Duration": 2.1460760000000003
  },
  {
    "Context": "DaemonSet Controller",
    "Test": "should update daemonsetCache with the newest created pod",
    "Duration": 1.0675025
  },
  {
    "Context": "Node Resource Level",
    "Test": "should maintain a correct count of resource usage as pods are deleted/added",
    "Duration": 0.7566735
  },
  {
    "Context": "Data Races",
    "Test": "should ensure that calling Synced() is valid while making updates to NodeClaims",
    "Duration": 0.537835917
  },
  {
    "Context": "Node Resource Level",
    "Test": "should handle a node changing from no providerID to registering a providerID",
    "Duration": 0.25278754200000003
  },
  {
    "Context": "Node Resource Level",
    "Test": "should track pods correctly if we miss events or they are consolidated",
    "Duration": 0.22724291600000002
  },
  {
    "Context": "Node Resource Level",
    "Test": "should mark node for deletion when node is deleted",
    "Duration": 0.22465641700000002
  },
  {
    "Context": "Volume Usage/Limits",
    "Test": "should hydrate the volume usage on a Node update",
    "Duration": 0.22363137500000002
  },
  {
    "Context": "Cluster State Sync",
    "Test": "shouldn't consider the cluster state synced if a nodeclaim without a providerID is deleted",
    "Duration": 0.222831041
  },
  {
    "Context": "Volume Usage/Limits",
    "Test": "should maintain the volume usage state when receiving NodeClaim updates",
    "Duration": 0.21953766700000002
  },
  {
    "Context": "Volume Usage/Limits",
    "Test": "should ignore the volume usage limits breach if the pod update is for an already tracked pod",
    "Duration": 0.21275120800000002
  },
  {
    "Context": "Node Resource Level",
    "Test": "should mark node for deletion when nodeclaim is deleted",
    "Duration": 0.209749333
  },
  {
    "Context": "Taints/Managed",
    "Test": "should consider startup taints on a managed node that isn't initialized",
    "Duration": 0.20619975000000001
  },
  {
    "Context": "Pod Healthy NodePool",
    "Test": "should delete the pod schedulable time if the pod is deleted",
    "Duration": 0.205383416
  },
  {
    "Context": "Taints/Managed",
    "Test": "should consider ephemeral taints on a managed node after the node is initialized",
    "Duration": 0.20517875000000002
  },
  {
    "Context": "Pod Anti-Affinity",
    "Test": "should handle events out of order",
    "Duration": 0.20481141700000002
  },
  {
    "Context": "Pod Anti-Affinity",
    "Test": "should stop tracking pods with required anti-affinity if the pod is deleted",
    "Duration": 0.204684041
  },
  {
    "Context": "HostPort Usage",
    "Test": "should ignore the host port usage conflict if the pod update is for an already tracked pod",
    "Duration": 0.20450341700000002
  },
  {
    "Context": "Taints/Unmanaged",
    "Test": "should consider ephemeral taints on an unmanaged node that isn't initialized",
    "Duration": 0.203996334
  },
  {
    "Context": "DaemonSet Controller",
    "Test": "should not update daemonsetCache when daemonset pod is not present",
    "Duration": 0.20390591700000002
  },
  {
    "Context": "Node Resource Level",
    "Test": "should count new pods bound to nodes",
    "Duration": 0.20370375000000002
  },
  {
    "Context": "Node Resource Level",
    "Test": "should track daemonset requested resources separately",
    "Duration": 0.20348437500000002
  },
  {
    "Context": "Node Resource Level",
    "Test": "should not add requests if the pod is terminal",
    "Duration": 0.20331995900000002
  },
  {
    "Context": "Pod Anti-Affinity",
    "Test": "should track pods with required anti-affinity",
    "Duration": 0.20288845800000002
  },
  {
    "Context": "Node Resource Level",
    "Test": "should subtract requests if the pod is deleted",
    "Duration": 0.202450916
  },
  {
    "Context": "Pod Healthy NodePool",
    "Test": "should not store pod schedulable time if the nodePool that pod is scheduled to does not have NodeRegistrationHealthy=true",
    "Duration": 0.20173200000000002
  },
  {
    "Context": "Pod Healthy NodePool",
    "Test": "should store pod schedulable time if the nodePool that pod is scheduled to has NodeRegistrationHealthy=true",
    "Duration": 0.20115470800000002
  },
  {
    "Context": "Pod Healthy NodePool",
    "Test": "should not update the pod schedulable time if it is already stored for a pod",
    "Duration": 0.200633417
  },
  {
    "Context": "HostPort Usage",
    "Test": "should hydrate the HostPort usage on a Node update",
    "Duration": 0.200623833
  },
  {
    "Context": "Taints/Unmanaged",
    "Test": "should consider ephemeral taints on an unmanaged node after the node is initialized",
    "Duration": 0.20019529100000003
  },
  {
    "Context": "DaemonSet Controller",
    "Test": "should update daemonsetCache when daemonset pod is created",
    "Duration": 0.200184084
  },
  {
    "Context": "HostPort Usage",
    "Test": "should maintain the host port usage state when receiving NodeClaim updates",
    "Duration": 0.20014462500000002
  },
  {
    "Context": "Pod Anti-Affinity",
    "Test": "should not track pods with preferred anti-affinity",
    "Duration": 0.200018791
  },
  {
    "Context": "Taints/Managed",
    "Test": "should consider startup taints on a managed node after the node is initialized",
    "Duration": 0.199725708
  },
  {
    "Context": "Consolidated State",
    "Test": "should cause consolidation state to change when a NodePool is updated",
    "Duration": 0.19933329200000002
  },
  {
    "Context": "Node Resource Level",
    "Test": "should count existing pods bound to nodes",
    "Duration": 0.19877312500000002
  },
  {
    "Context": "Node Resource Level",
    "Test": "should not count pods not bound to nodes",
    "Duration": 0.19083745900000001
  },
  {
    "Context": "Node Resource Level",
    "Test": "should stop tracking nodes that are deleted",
    "Duration": 0.127762958
  },
  {
    "Context": "Taints/Managed",
    "Test": "should not consider ephemeral taints on a managed node that isn't initialized",
    "Duration": 0.12012875
  },
  {
    "Context": "Node Deletion",
    "Test": "should not leak a state node when the NodeClaim and Node names match",
    "Duration": 0.11925704100000001
  },
  {
    "Context": "Cluster State Sync",
    "Test": "should emit cluster_state_unsynced_time_seconds metric when cluster state is unsynced",
    "Duration": 0.11726162500000001
  },
  {
    "Context": "Cluster State Sync",
    "Test": "shouldn't consider the cluster state synced if a nodeclaim is added manually with UpdateNodeClaim",
    "Duration": 0.115658333
  },
  {
    "Context": "Consolidated State",
    "Test": "should update the consolidated value when setting consolidation",
    "Duration": 0.1120165
  },
  {
    "Context": "Consolidated State",
    "Test": "should update the consolidated value when state timeout (5m) has passed and state hasn't changed",
    "Duration": 0.10490637500000001
  },
  {
    "Context": "DaemonSet Controller",
    "Test": "should delete daemonset in cache when daemonset is deleted",
    "Duration": 0.10126900000000001
  }
]

After

❯ ginkgo --json-report=report.json ./pkg/controllers/state/...
Running Suite: Controllers/State - /Users/schalor/github.com/testing/karpenter/pkg/controllers/state
====================================================================================================
Random Seed: 1741370650

Will run 58 of 58 specs
••••••••••••••••••••••••••••••••••••••••••••••••••••••••••

Ran 58 of 58 Specs in 71.118 seconds
SUCCESS! -- 58 Passed | 0 Failed | 0 Pending | 0 Skipped
PASS

Ginkgo ran 1 suite in 1m20.935683667s
Test Suite Passed
~/github.com/testing/karpenter speed-up-cluster-state-tests                                                                                                                                   1m 21s 10:05:31
❯ cat report.json | jq -r '                                   
    [
        .[].SpecReports.[]?
        | select( .State | match("passed") )
        | select( .LeafNodeType | match("It"))
        | {
            Context: ( .ContainerHierarchyTexts | join("/")),
            Test: .LeafNodeText,
            Duration: (.RunTime * .000000001)
        }
    ]
    | sort_by( .Duration )
    | reverse
' | head -n 1000
[
  {
    "Context": "Node Resource Level",
    "Test": "should nominate the node until the nomination time passes",
    "Duration": 21.208430875
  },
  {
    "Context": "Data Races",
    "Test": "should ensure that calling Synced() is valid while making updates to Nodes",
    "Duration": 10.240311375000001
  },
  {
    "Context": "Cluster State Sync",
    "Test": "should consider the cluster state synced when nodes register provider id",
    "Duration": 2.669562291
  },
  {
    "Context": "Cluster State Sync",
    "Test": "should consider the cluster state synced when a combination of nodeclaims and node are tracked",
    "Duration": 2.564343584
  },
  {
    "Context": "DaemonSet Controller",
    "Test": "should only return daemonset pods from the daemonset cache",
    "Duration": 2.4232065840000003
  },
  {
    "Context": "Cluster State Sync",
    "Test": "shouldn't consider the cluster state synced if a nodeclaim hasn't resolved its provider id",
    "Duration": 2.383607209
  },
  {
    "Context": "Cluster State Sync",
    "Test": "shouldn't consider the cluster state synced if a nodeclaim isn't tracked",
    "Duration": 2.382122125
  },
  {
    "Context": "Cluster State Sync",
    "Test": "should consider the cluster state synced when all nodeclaims are tracked",
    "Duration": 2.2519635
  },
  {
    "Context": "Consolidated State",
    "Test": "should update the consolidated value when setting consolidation",
    "Duration": 2.062912041
  },
  {
    "Context": "Cluster State Sync",
    "Test": "should consider the cluster state synced when the representation of nodes is the same",
    "Duration": 1.997521834
  },
  {
    "Context": "Cluster State Sync",
    "Test": "shouldn't consider the cluster state synced if a node isn't tracked",
    "Duration": 1.9787601250000002
  },
  {
    "Context": "Cluster State Sync",
    "Test": "should consider the cluster state synced when nodes don't have provider id",
    "Duration": 1.9313523330000002
  },
  {
    "Context": "Cluster State Sync",
    "Test": "should consider the cluster state synced when all nodes are tracked",
    "Duration": 1.9279494590000001
  },
  {
    "Context": "Cluster State Sync",
    "Test": "should consider the cluster state synced when a new node is added after the initial sync",
    "Duration": 1.5417767910000002
  },
  {
    "Context": "DaemonSet Controller",
    "Test": "should update daemonsetCache with the newest created pod",
    "Duration": 1.1516110830000001
  },
  {
    "Context": "Node Resource Level",
    "Test": "should maintain a correct count of resource usage as pods are deleted/added",
    "Duration": 0.708601875
  },
  {
    "Context": "Data Races",
    "Test": "should ensure that calling Synced() is valid while making updates to NodeClaims",
    "Duration": 0.5482551250000001
  },
  {
    "Context": "Node Resource Level",
    "Test": "should handle a node changing from no providerID to registering a providerID",
    "Duration": 0.314837584
  },
  {
    "Context": "Cluster State Sync",
    "Test": "shouldn't consider the cluster state synced if a nodeclaim without a providerID is deleted",
    "Duration": 0.304889792
  },
  {
    "Context": "Volume Usage/Limits",
    "Test": "should hydrate the volume usage on a Node update",
    "Duration": 0.275208791
  },
  {
    "Context": "Node Resource Level",
    "Test": "should mark node for deletion when node is deleted",
    "Duration": 0.232294291
  },
  {
    "Context": "Pod Anti-Affinity",
    "Test": "should stop tracking pods with required anti-affinity if the pod is deleted",
    "Duration": 0.22770141600000002
  },
  {
    "Context": "Node Resource Level",
    "Test": "should count new pods bound to nodes",
    "Duration": 0.21152616700000001
  },
  {
    "Context": "Node Resource Level",
    "Test": "should not count pods not bound to nodes",
    "Duration": 0.209881375
  },
  {
    "Context": "Node Resource Level",
    "Test": "should mark node for deletion when nodeclaim is deleted",
    "Duration": 0.20678041700000002
  },
  {
    "Context": "Node Resource Level",
    "Test": "should not add requests if the pod is terminal",
    "Duration": 0.20482508300000002
  },
  {
    "Context": "HostPort Usage",
    "Test": "should maintain the host port usage state when receiving NodeClaim updates",
    "Duration": 0.20430741700000002
  },
  {
    "Context": "Consolidated State",
    "Test": "should cause consolidation state to change when a NodePool is updated",
    "Duration": 0.20406262500000003
  },
  {
    "Context": "HostPort Usage",
    "Test": "should ignore the host port usage conflict if the pod update is for an already tracked pod",
    "Duration": 0.203028167
  },
  {
    "Context": "Pod Ack",
    "Test": "should only mark pods as schedulable once",
    "Duration": 0.202409041
  },
  {
    "Context": "Node Resource Level",
    "Test": "should count existing pods bound to nodes",
    "Duration": 0.202264792
  },
  {
    "Context": "DaemonSet Controller",
    "Test": "should not update daemonsetCache when daemonset pod is not present",
    "Duration": 0.20223570800000001
  },
  {
    "Context": "DaemonSet Controller",
    "Test": "should update daemonsetCache when daemonset pod is created",
    "Duration": 0.202154042
  },
  {
    "Context": "Pod Anti-Affinity",
    "Test": "should not track pods with preferred anti-affinity",
    "Duration": 0.201773333
  },
  {
    "Context": "Volume Usage/Limits",
    "Test": "should ignore the volume usage limits breach if the pod update is for an already tracked pod",
    "Duration": 0.20157833300000003
  },
  {
    "Context": "Volume Usage/Limits",
    "Test": "should maintain the volume usage state when receiving NodeClaim updates",
    "Duration": 0.20155229200000002
  },
  {
    "Context": "Taints/Unmanaged",
    "Test": "should consider ephemeral taints on an unmanaged node after the node is initialized",
    "Duration": 0.201175708
  },
  {
    "Context": "Node Resource Level",
    "Test": "should track pods correctly if we miss events or they are consolidated",
    "Duration": 0.20116391700000003
  },
  {
    "Context": "Node Resource Level",
    "Test": "should stop tracking nodes that are deleted",
    "Duration": 0.200802333
  },
  {
    "Context": "Taints/Managed",
    "Test": "should consider ephemeral taints on a managed node after the node is initialized",
    "Duration": 0.200752375
  },
  {
    "Context": "DaemonSet Controller",
    "Test": "should delete daemonset in cache when daemonset is deleted",
    "Duration": 0.20070437500000002
  },
  {
    "Context": "Taints/Managed",
    "Test": "should consider startup taints on a managed node that isn't initialized",
    "Duration": 0.20069245800000002
  },
  {
    "Context": "Taints/Unmanaged",
    "Test": "should consider ephemeral taints on an unmanaged node that isn't initialized",
    "Duration": 0.20056658400000002
  },
  {
    "Context": "Pod Anti-Affinity",
    "Test": "should track pods with required anti-affinity",
    "Duration": 0.20037275000000002
  },
  {
    "Context": "Pod Healthy NodePool",
    "Test": "should not store pod schedulable time if the nodePool that pod is scheduled to does not have NodeRegistrationHealthy=true",
    "Duration": 0.20021987500000002
  },
  {
    "Context": "Pod Healthy NodePool",
    "Test": "should store pod schedulable time if the nodePool that pod is scheduled to has NodeRegistrationHealthy=true",
    "Duration": 0.200219625
  },
  {
    "Context": "Taints/Managed",
    "Test": "should consider startup taints on a managed node after the node is initialized",
    "Duration": 0.19995995800000002
  },
  {
    "Context": "Node Resource Level",
    "Test": "should subtract requests if the pod is deleted",
    "Duration": 0.19926991600000002
  },
  {
    "Context": "Consolidated State",
    "Test": "should update the consolidated value when state timeout (5m) has passed and state hasn't changed",
    "Duration": 0.19294191700000002
  },
  {
    "Context": "HostPort Usage",
    "Test": "should hydrate the HostPort usage on a Node update",
    "Duration": 0.19069612500000002
  },
  {
    "Context": "Cluster State Sync",
    "Test": "should emit cluster_state_unsynced_time_seconds metric when cluster state is unsynced",
    "Duration": 0.144736042
  },
  {
    "Context": "Node Resource Level",
    "Test": "should track daemonset requested resources separately",
    "Duration": 0.131366375
  },
  {
    "Context": "Pod Anti-Affinity",
    "Test": "should handle events out of order",
    "Duration": 0.12311008300000001
  },
  {
    "Context": "Taints/Managed",
    "Test": "should not consider ephemeral taints on a managed node that isn't initialized",
    "Duration": 0.12255966700000001
  },
  {
    "Context": "Node Deletion",
    "Test": "should not leak a state node when the NodeClaim and Node names match",
    "Duration": 0.11937995800000001
  },
  {
    "Context": "Pod Healthy NodePool",
    "Test": "should not update the pod schedulable time if it is already stored for a pod",
    "Duration": 0.10301270800000001
  },
  {
    "Context": "Pod Healthy NodePool",
    "Test": "should delete the pod schedulable time if the pod is deleted",
    "Duration": 0.100320083
  },
  {
    "Context": "Cluster State Sync",
    "Test": "shouldn't consider the cluster state synced if a nodeclaim is added manually with UpdateNodeClaim",
    "Duration": 0.014715750000000001
  }
]

By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.

@k8s-ci-robot
Contributor

Skipping CI for Draft Pull Request.
If you want CI signal for your change, please convert it to an actual PR.
You can still manually trigger a test run with /test all

@k8s-ci-robot k8s-ci-robot added the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Mar 7, 2025
@k8s-ci-robot k8s-ci-robot added cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. size/L Denotes a PR that changes 100-499 lines, ignoring generated files. labels Mar 7, 2025
@rschalo rschalo changed the title Speed up cluster state tests test: Improved Cluster State Test Suite duration from 727 seconds to 71 seconds Mar 7, 2025
@coveralls

coveralls commented Mar 7, 2025

Pull Request Test Coverage Report for Build 13727202524

Warning: This coverage report may be inaccurate.

This pull request's base commit is no longer the HEAD commit of its target branch. This means it includes changes from outside the original pull request, including, potentially, unrelated coverage changes.

Details

  • 8 of 8 (100.0%) changed or added relevant lines in 1 file are covered.
  • No unchanged relevant lines lost coverage.
  • Overall coverage increased (+0.06%) to 81.627%

Totals Coverage Status
  • Change from base Build 13716264738: 0.06%
  • Covered Lines: 9601
  • Relevant Lines: 11762

💛 - Coveralls

@rschalo rschalo marked this pull request as ready for review March 7, 2025 18:22
@k8s-ci-robot k8s-ci-robot removed the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Mar 7, 2025
Member

@jonathan-innis jonathan-innis left a comment


/lgtm
/approve

@k8s-ci-robot k8s-ci-robot added the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Mar 7, 2025
@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: jonathan-innis, rschalo

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Mar 7, 2025
@k8s-ci-robot k8s-ci-robot merged commit efa12a3 into kubernetes-sigs:main Mar 7, 2025
14 checks passed
@rschalo rschalo deleted the speed-up-cluster-state-tests branch March 7, 2025 18:59