
capacity: fix duplicate topology (attempt #2)#1450

Merged
k8s-ci-robot merged 1 commit into kubernetes-csi:master from huww98:fix-duplicate-capacity-2
Apr 22, 2026

Conversation

@huww98 huww98 commented Nov 26, 2025

When the controller starts, two sync() calls can run simultaneously: one from HasSynced(), another from processNextWorkItem(). Each will produce its own instance for the same topology segment and pass it to the callbacks.

This results in duplicated entries in the capacities map, which leads to either:

  • Two CSIStorageCapacity objects being created for the same topology, or
  • The same CSIStorageCapacity object being assigned to two keys in the capacities map. When one of them is updated, the other holds an outdated object, and all subsequent updates fail with a conflict.
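The race and the dedup that fixes it can be sketched in a minimal, stdlib-only Go program. All names here (segmentKey, store, addIfNew) are hypothetical stand-ins, not the controller's real API; the point is that a mutex-guarded "seen" set makes concurrent sync() calls hand each segment to the callbacks only once:

```go
package main

import (
	"fmt"
	"sync"
)

// segmentKey stands in for a canonical topology-segment key
// (hypothetical; the real controller derives it from node labels).
type segmentKey string

// store deduplicates per-segment callbacks: even if two concurrent
// sync() calls observe the same segment, only one instance reaches
// the callbacks, so only one CSIStorageCapacity would be created.
type store struct {
	mu   sync.Mutex
	seen map[segmentKey]bool
}

func (s *store) addIfNew(k segmentKey, onNew func(segmentKey)) {
	s.mu.Lock()
	defer s.mu.Unlock()
	if s.seen[k] {
		return // duplicate from a concurrent sync; drop it
	}
	s.seen[k] = true
	onNew(k) // runs under the lock; fine for a sketch
}

func main() {
	s := &store{seen: map[segmentKey]bool{}}
	created := 0 // mutated only inside onNew, i.e. under s.mu

	var wg sync.WaitGroup
	for i := 0; i < 2; i++ { // the two concurrent sync() calls
		wg.Add(1)
		go func() {
			defer wg.Done()
			s.addIfNew("zone-a", func(segmentKey) { created++ })
		}()
	}
	wg.Wait()
	fmt.Println(created) // 1: the segment is handed to callbacks once
}
```

Without the seen check, both goroutines would invoke the callback and the map would end up with two entries for the same segment, matching the failure modes above.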

What type of PR is this?

/kind bug


What this PR does / why we need it:

Which issue(s) this PR fixes:

Fixes #

Special notes for your reviewer:

Please see also #1435

Does this PR introduce a user-facing change?:

Fixed possible duplicated CSIStorageCapacity objects and constantly failing update requests.

/cc @pohly

@k8s-ci-robot k8s-ci-robot requested a review from pohly November 26, 2025 15:14
@k8s-ci-robot k8s-ci-robot added release-note Denotes a PR that will be considered when it comes time to generate release notes. kind/bug Categorizes issue or PR as related to a bug. size/M Denotes a PR that changes 30-99 lines, ignoring generated files. cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. labels Nov 26, 2025
Comment thread pkg/capacity/topology/nodes_test.go Outdated
func TestHasSynced(t *testing.T) {
synctest.Test(t, func(t *testing.T) {
client := fakeclientset.NewSimpleClientset()
informerFactory := informers.NewSharedInformerFactory(client, 1*time.Hour)
Contributor Author

For why the resync period is not set to 0, see kubernetes/kubernetes#133500

Contributor

I don't see the connection to kubernetes/kubernetes#133500. Can you explain here why a resync period of one hour is useful?

Contributor

I'm guessing it's because without kubernetes/kubernetes#133500, a zero resync period would cause reads from neverExitWatch, which isn't valid in a synctest. But it would be nice to explicitly state that.

Contributor Author

Now that Kubernetes v0.35 is included in go.mod, we can actually set the resync period to 0, so no special comment is needed anymore.

Comment on lines +215 to +218
go func() {
<-ctx.Done()
nt.queue.ShutDown()
}()
Contributor Author

synctest checks for leaked goroutines, so I have to clean this one up.
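The cleanup pattern above can be illustrated with plain channels. This is a stdlib-only sketch: queue, Get, and ShutDown are hypothetical stand-ins for the client-go workqueue API, showing how tying ShutDown to ctx.Done() lets a blocked worker goroutine exit instead of leaking:

```go
package main

import (
	"context"
	"fmt"
	"sync"
)

// queue is a minimal stand-in for a client-go workqueue
// (hypothetical): Get blocks until an item arrives or ShutDown runs.
type queue struct {
	items chan string
	once  sync.Once
	done  chan struct{}
}

func newQueue() *queue {
	return &queue{items: make(chan string, 8), done: make(chan struct{})}
}

func (q *queue) ShutDown() { q.once.Do(func() { close(q.done) }) }

// Get reports shutdown=true once ShutDown has been called,
// letting the worker goroutine below terminate.
func (q *queue) Get() (item string, shutdown bool) {
	select {
	case it := <-q.items:
		return it, false
	case <-q.done:
		return "", true
	}
}

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	q := newQueue()

	// The pattern from the PR: tie queue shutdown to the context
	// so the blocked worker can exit when the test is done.
	go func() {
		<-ctx.Done()
		q.ShutDown()
	}()

	var wg sync.WaitGroup
	wg.Add(1)
	go func() { // worker: would leak forever without the ShutDown above
		defer wg.Done()
		for {
			if _, shutdown := q.Get(); shutdown {
				return
			}
		}
	}()

	cancel()
	wg.Wait() // every goroutine has exited; synctest would see no leaks
	fmt.Println("clean shutdown")
}
```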


huww98 commented Nov 26, 2025

We need go 1.25 to use the synctest package :(

Comment thread cmd/csi-provisioner/csi-provisioner.go
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Mark this PR as fresh with /remove-lifecycle stale
  • Close this PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Mar 15, 2026
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Mark this PR as fresh with /remove-lifecycle rotten
  • Close this PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Apr 14, 2026

pohly commented Apr 17, 2026

@k8s-ci-robot k8s-ci-robot removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Apr 17, 2026

huww98 commented Apr 17, 2026

Yes, I was waiting for the Go 1.25 update for synctest. Now that we have it, let me rebase and resolve the comments.

@huww98 huww98 force-pushed the fix-duplicate-capacity-2 branch from 1b66f2e to 5f11dc4 Compare April 17, 2026 09:14
// wait for sync.
factoryForNamespace.Start(ctx.Done())
}
if topologyInformer != nil {
Contributor

Why is it important that topologyInformer gets started here vs. where it was started before?

If it's important, then let's add a comment explaining why. If it's not important, then let's not change it.

Contributor Author
@huww98 huww98 Apr 22, 2026


All other informers are started here, so it is natural to start this one here too.
I think we should only start topologyInformer after factoryForNamespace.Start, to avoid cache.WaitForCacheSync waiting on informers that were never started, which wastes a little CPU while waiting for leader election.

There is already a comment just before:

			// Starting is enough, the capacityController and topologyInformer will
			// wait for sync.

Do you think this is enough?

Contributor

I think both approaches would be fine and in general I prefer to avoid drive-by enhancements that aren't related to what a PR is primarily trying to do (in this case "fix duplicate topology"). It causes churn and makes reviews harder.

But we can keep it.

Contributor

@pohly pohly left a comment

/lgtm
/approve


@k8s-ci-robot k8s-ci-robot added the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Apr 22, 2026
@k8s-ci-robot

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: huww98, pohly

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Apr 22, 2026
@k8s-ci-robot k8s-ci-robot merged commit 2311410 into kubernetes-csi:master Apr 22, 2026
8 checks passed