
added informer manager #208

Merged
cirisked merged 8 commits into main from informer-manager on Jan 12, 2026

Conversation

@cirisked (Contributor) commented Jan 9, 2026

No description provided.

@cirisked cirisked requested a review from a team as a code owner January 9, 2026 10:12
// Start starts the informer factory and waits for all caches to sync.
// This method blocks until caches are synchronized or the context is canceled.
func (m *Manager) Start(ctx context.Context) error {
m.mu.Lock()
Contributor:

Locking seems excessive here: the manager should be started once on startup and stopped later. Consider just removing it.

m.mu.Lock()
defer m.mu.Unlock()

if m.started {
Contributor:

If the informers have already been started, m.cancelFunc will not be nil, so it's enough to check for that. Also, based on the anticipated usage, the separate flag seems excessive.

Contributor (author):

I'd like to disagree: I prefer verbosity over relying on context.

}()

m.log.Info("starting shared informer factory...")
m.factory.Start(stopCh)
Contributor:

ctx.Done() can be passed instead of a dedicated stopCh. That goroutine can then be removed too, since Stop() can call m.factory.Shutdown() to wait for the informers to stop once the ctx is canceled.

m.log.Info("waiting for informer caches to sync...")
if !cache.WaitForCacheSync(syncCtx.Done(), m.nodes.HasSynced, m.pods.HasSynced) {
cancel()
metrics.IncrementInformerCacheSyncs("node", "failure")
Contributor:

I think these metrics are excessive: if Start fails on startup, the whole process fails, right?

}

// IsStarted returns true if the informer manager has been started and caches are synced.
func (m *Manager) IsStarted() bool {
Contributor:

Do we have a real use case for this?

clientset := fake.NewClientset(node, pod)
manager := NewManager(log, clientset, 0)

ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
Contributor:

Consider using t.Context() instead of context.Background().

clientset := fake.NewClientset()
manager := NewManager(log, clientset, 0)

ctx, cancel := context.WithCancel(context.Background())
Contributor:

It seems table tests could be used here instead of two separate tests with mostly redundant logic.

"k8s.io/client-go/kubernetes/fake"
)

func TestNodeInformer_Informer(t *testing.T) {
Contributor:

Looks like all these methods could be checked in a single test?

@cirisked cirisked merged commit 964694f into main Jan 12, 2026
3 checks passed
@cirisked cirisked deleted the informer-manager branch January 12, 2026 13:25