Conditions/policy poolconfig #1013
SchSeba wants to merge 15 commits into k8snetworkplumbingwg:master
Conversation
Thanks for your PR. To skip the vendor CIs, maintainers can use one of:
Summary of Changes

Hello @SchSeba, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed.

This pull request significantly improves the SR-IOV Network Operator's ability to communicate the health and operational state of its resources. By adopting standard Kubernetes conditions and introducing aggregation mechanisms, users gain clearer insight into the readiness, progress, and any issues affecting their SR-IOV deployments. This standardization enables better monitoring and automated responses, and aligns the operator more closely with Kubernetes best practices.
Code Review
This pull request introduces a comprehensive integration of standard Kubernetes conditions into the status of various SR-IOV Network Operator Custom Resource Definitions (CRDs), including SriovNetworkNodeState, SriovNetworkNodePolicy, and SriovNetworkPoolConfig. New condition types such as Progressing, Degraded, and Ready, plus drain-specific conditions (DrainProgressing, DrainDegraded, DrainComplete), are defined, along with a rich set of reasons for these conditions. Helper functions for managing and comparing conditions (ConditionsEqual, SetConfigurationConditions, SetDrainConditions) are added to the API.

Dedicated status controllers (SriovNetworkNodePolicyStatusReconciler, SriovNetworkPoolConfigStatusReconciler) are introduced to aggregate conditions from matching SriovNetworkNodeState objects and update the status of policies and pool configurations, including MatchedNodeCount and ReadyNodeCount. The drain controller is updated to leverage the new drain-specific conditions, including a new DrainErrorCallback to capture immediate errors during node draining. CRD definitions and deepcopy functions are updated accordingly, and extensive unit and integration tests are added for the new condition logic and status controllers.

A new pkg/status package was added for status updates with retries and event emission, but a review comment notes that this package is currently unused by the controllers and suggests either leveraging it or removing it. Another review comment points out that the updateDrainConditions function in the drain controller lacks a retry loop for status updates, which could lead to conflicts, and provides a suggested fix using retry.RetryOnConflict.
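For orientation, a LastTransitionTime-insensitive comparison such as the described ConditionsEqual helper can be sketched as follows; this is a minimal sketch, not necessarily the PR's exact implementation:

```go
package v1

import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

// conditionsEqual is a sketch: compare everything that matters for semantic
// equality while deliberately ignoring LastTransitionTime. The PR's actual
// ConditionsEqual may differ in details such as ordering guarantees.
func conditionsEqual(a, b []metav1.Condition) bool {
	if len(a) != len(b) {
		return false
	}
	for i := range a {
		if a[i].Type != b[i].Type ||
			a[i].Status != b[i].Status ||
			a[i].Reason != b[i].Reason ||
			a[i].Message != b[i].Message ||
			a[i].ObservedGeneration != b[i].ObservedGeneration {
			return false
		}
	}
	return true
}
```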
```go
func (dr *DrainReconcile) updateDrainConditions(ctx context.Context, nodeNetworkState *sriovnetworkv1.SriovNetworkNodeState, state sriovnetworkv1.DrainState, errorMessage string) error {
	reqLogger := ctx.Value(constants.LoggerContextKey).(logr.Logger).WithName("updateDrainConditions")

	// Get the latest version of the nodeNetworkState
	latestState := &sriovnetworkv1.SriovNetworkNodeState{}
	if err := dr.Get(ctx, client.ObjectKey{Namespace: nodeNetworkState.Namespace, Name: nodeNetworkState.Name}, latestState); err != nil {
		return err
	}

	// Create a copy for the patch base
	currentState := latestState.DeepCopy()

	// Store old conditions for comparison
	oldConditions := make([]metav1.Condition, len(latestState.Status.Conditions))
	copy(oldConditions, latestState.Status.Conditions)

	// Set drain conditions - SetDrainConditions already sets the correct ObservedGeneration
	// for each drain condition it creates
	latestState.SetDrainConditions(state, errorMessage)

	// Only update if conditions actually changed (ignore LastTransitionTime for comparison)
	if sriovnetworkv1.ConditionsEqual(oldConditions, latestState.Status.Conditions) {
		reqLogger.V(2).Info("drain conditions unchanged, skipping update", "state", state)
		return nil
	}

	// Use Patch instead of Update to avoid overwriting configuration conditions
	// that may have been set by the daemon concurrently
	if err := dr.Status().Patch(ctx, latestState, client.MergeFrom(currentState)); err != nil {
		reqLogger.Error(err, "failed to update drain conditions")
		return err
	}

	reqLogger.V(2).Info("updated drain conditions", "state", state)
	return nil
}
```
The updateDrainConditions function performs a get-then-patch operation without a retry loop. This could lead to failures if the SriovNetworkNodeState object is updated by another controller (like the daemon) between the Get and Patch calls, causing an API conflict. It's best practice to wrap status updates in a retry loop, like retry.RetryOnConflict, to handle these conflicts gracefully. You'll need to add an import for k8s.io/client-go/util/retry.
Here is a suggested implementation:
```go
func (dr *DrainReconcile) updateDrainConditions(ctx context.Context, nodeNetworkState *sriovnetworkv1.SriovNetworkNodeState, state sriovnetworkv1.DrainState, errorMessage string) error {
	reqLogger := ctx.Value(constants.LoggerContextKey).(logr.Logger).WithName("updateDrainConditions")
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		// Get the latest version of the nodeNetworkState
		latestState := &sriovnetworkv1.SriovNetworkNodeState{}
		if err := dr.Get(ctx, client.ObjectKey{Namespace: nodeNetworkState.Namespace, Name: nodeNetworkState.Name}, latestState); err != nil {
			return err
		}
		// Create a copy for the patch base
		currentState := latestState.DeepCopy()
		// Store old conditions for comparison
		oldConditions := make([]metav1.Condition, len(latestState.Status.Conditions))
		copy(oldConditions, latestState.Status.Conditions)
		// Set drain conditions - SetDrainConditions already sets the correct ObservedGeneration
		// for each drain condition it creates
		latestState.SetDrainConditions(state, errorMessage)
		// Only update if conditions actually changed (ignore LastTransitionTime for comparison)
		if sriovnetworkv1.ConditionsEqual(oldConditions, latestState.Status.Conditions) {
			reqLogger.V(2).Info("drain conditions unchanged, skipping update", "state", state)
			return nil
		}
		// Use Patch instead of Update to avoid overwriting configuration conditions
		// that may have been set by the daemon concurrently
		if err := dr.Status().Patch(ctx, latestState, client.MergeFrom(currentState)); err != nil {
			reqLogger.Error(err, "failed to update drain conditions")
			return err
		}
		reqLogger.V(2).Info("updated drain conditions", "state", state)
		return nil
	})
}
```

pkg/status/patcher.go (new file):

```go
/*
Copyright 2021.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

package status

import (
	"context"
	"fmt"

	"k8s.io/apimachinery/pkg/api/errors"
	"k8s.io/apimachinery/pkg/api/meta"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/client-go/tools/record"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

const (
	// MaxRetries is the maximum number of retries for status updates on conflict
	MaxRetries = 3
)

// Patcher provides methods for updating resource status with retry logic and event emission
//
//go:generate ../../bin/mockgen -destination mock/mock_patcher.go -source patcher.go
type Interface interface {
	// UpdateStatus updates the status of a resource with retry on conflict
	UpdateStatus(ctx context.Context, obj client.Object, updateFunc func() error) error

	// PatchStatus patches the status of a resource with retry on conflict
	PatchStatus(ctx context.Context, obj client.Object, patch client.Patch) error

	// UpdateStatusWithEvents updates status and emits events for condition transitions
	UpdateStatusWithEvents(ctx context.Context, obj client.Object, oldConditions, newConditions []metav1.Condition, updateFunc func() error) error

	// Condition management methods (delegated to embedded condition manager)

	// SetCondition sets a condition with the given type, status, reason, and message
	SetCondition(conditions *[]metav1.Condition, conditionType string, status metav1.ConditionStatus, reason, message string, generation int64)

	// IsConditionTrue checks if a condition exists and has status True
	IsConditionTrue(conditions []metav1.Condition, conditionType string) bool

	// IsConditionFalse checks if a condition exists and has status False
	IsConditionFalse(conditions []metav1.Condition, conditionType string) bool

	// IsConditionUnknown checks if a condition exists and has status Unknown
	IsConditionUnknown(conditions []metav1.Condition, conditionType string) bool

	// FindCondition returns the condition with the given type, or nil if not found
	FindCondition(conditions []metav1.Condition, conditionType string) *metav1.Condition

	// RemoveCondition removes the condition with the given type
	RemoveCondition(conditions *[]metav1.Condition, conditionType string)

	// HasConditionChanged checks if the new condition differs from the existing one
	HasConditionChanged(conditions []metav1.Condition, conditionType string, status metav1.ConditionStatus, reason, message string) bool
}

// Patcher is the production implementation of Patcher
type Patcher struct {
	client   client.Client
	recorder record.EventRecorder
	scheme   *runtime.Scheme
}

// NewPatcher creates a new Patcher instance
func NewPatcher(client client.Client, recorder record.EventRecorder, scheme *runtime.Scheme) Interface {
	return &Patcher{
		client:   client,
		recorder: recorder,
		scheme:   scheme,
	}
}

// UpdateStatus updates the status of a resource with retry on conflict
func (p *Patcher) UpdateStatus(ctx context.Context, obj client.Object, updateFunc func() error) error {
	var lastErr error

	for i := 0; i < MaxRetries; i++ {
		// Apply the update function
		if err := updateFunc(); err != nil {
			return fmt.Errorf("failed to apply status update: %w", err)
		}

		// Try to update the status
		if err := p.client.Status().Update(ctx, obj); err != nil {
			if errors.IsConflict(err) {
				// Conflict, retry after fetching latest version
				lastErr = err
				if err := p.client.Get(ctx, client.ObjectKeyFromObject(obj), obj); err != nil {
					return fmt.Errorf("failed to fetch latest version for retry: %w", err)
				}
				continue
			}
			return fmt.Errorf("failed to update status: %w", err)
		}

		// Success
		return nil
	}

	return fmt.Errorf("failed to update status after %d retries: %w", MaxRetries, lastErr)
}

// PatchStatus patches the status of a resource with retry on conflict
func (p *Patcher) PatchStatus(ctx context.Context, obj client.Object, patch client.Patch) error {
	var lastErr error

	for i := 0; i < MaxRetries; i++ {
		if err := p.client.Status().Patch(ctx, obj, patch); err != nil {
			if errors.IsConflict(err) {
				// Conflict, retry after fetching latest version
				lastErr = err
				if err := p.client.Get(ctx, client.ObjectKeyFromObject(obj), obj); err != nil {
					return fmt.Errorf("failed to fetch latest version for retry: %w", err)
				}
				continue
			}
			return fmt.Errorf("failed to patch status: %w", err)
		}

		// Success
		return nil
	}

	return fmt.Errorf("failed to patch status after %d retries: %w", MaxRetries, lastErr)
}

// UpdateStatusWithEvents updates status using Patch and emits events for condition transitions.
// Using Patch instead of Update ensures that only the modified fields are updated,
// preventing race conditions where concurrent updates could overwrite each other's changes.
func (p *Patcher) UpdateStatusWithEvents(ctx context.Context, obj client.Object, oldConditions, newConditions []metav1.Condition, updateFunc func() error) error {
	var lastErr error

	for i := 0; i < MaxRetries; i++ {
		// Create a deep copy before modifications to use as patch base
		baseCopy := obj.DeepCopyObject().(client.Object)

		// Apply the update function
		if err := updateFunc(); err != nil {
			return fmt.Errorf("failed to apply status update: %w", err)
		}

		// Use Patch instead of Update to avoid overwriting concurrent changes
		if err := p.client.Status().Patch(ctx, obj, client.MergeFrom(baseCopy)); err != nil {
			if errors.IsConflict(err) {
				// Conflict, retry after fetching latest version
				lastErr = err
				if err := p.client.Get(ctx, client.ObjectKeyFromObject(obj), obj); err != nil {
					return fmt.Errorf("failed to fetch latest version for retry: %w", err)
				}
				continue
			}
			return fmt.Errorf("failed to patch status: %w", err)
		}

		// Success - emit events for condition transitions
		if p.recorder != nil {
			transitions := DetectTransitions(oldConditions, newConditions)
			for _, transition := range transitions {
				if transition.Type != TransitionUnchanged {
					p.recorder.Event(obj, transition.EventType(), transition.EventReason(), transition.Message)
				}
			}
		}

		return nil
	}

	return fmt.Errorf("failed to patch status after %d retries: %w", MaxRetries, lastErr)
}

// Condition management methods

// SetCondition sets a condition with the given type, status, reason, and message
func (p *Patcher) SetCondition(conditions *[]metav1.Condition, conditionType string, status metav1.ConditionStatus, reason, message string, generation int64) {
	condition := metav1.Condition{
		Type:               conditionType,
		Status:             status,
		Reason:             reason,
		Message:            message,
		ObservedGeneration: generation,
	}
	meta.SetStatusCondition(conditions, condition)
}

// IsConditionTrue checks if a condition exists and has status True
func (p *Patcher) IsConditionTrue(conditions []metav1.Condition, conditionType string) bool {
	condition := meta.FindStatusCondition(conditions, conditionType)
	return condition != nil && condition.Status == metav1.ConditionTrue
}

// IsConditionFalse checks if a condition exists and has status False
func (p *Patcher) IsConditionFalse(conditions []metav1.Condition, conditionType string) bool {
	condition := meta.FindStatusCondition(conditions, conditionType)
	return condition != nil && condition.Status == metav1.ConditionFalse
}

// IsConditionUnknown checks if a condition exists and has status Unknown
func (p *Patcher) IsConditionUnknown(conditions []metav1.Condition, conditionType string) bool {
	condition := meta.FindStatusCondition(conditions, conditionType)
	return condition != nil && condition.Status == metav1.ConditionUnknown
}

// FindCondition returns the condition with the given type, or nil if not found
func (p *Patcher) FindCondition(conditions []metav1.Condition, conditionType string) *metav1.Condition {
	return meta.FindStatusCondition(conditions, conditionType)
}

// RemoveCondition removes the condition with the given type
func (p *Patcher) RemoveCondition(conditions *[]metav1.Condition, conditionType string) {
	meta.RemoveStatusCondition(conditions, conditionType)
}

// HasConditionChanged checks if the new condition differs from the existing one
// Returns true if the condition doesn't exist or if status, reason, or message differ
func (p *Patcher) HasConditionChanged(conditions []metav1.Condition, conditionType string, status metav1.ConditionStatus, reason, message string) bool {
	existing := meta.FindStatusCondition(conditions, conditionType)
	if existing == nil {
		return true
	}
	return existing.Status != status || existing.Reason != reason || existing.Message != message
}
```
This new pkg/status package provides a great abstraction for handling status updates with retries and event emissions. However, it appears to be unused in this pull request. For example, controllers/drain_controller_helper.go implements its own status update logic without using this patcher. To improve code reuse and maintainability, this new utility should be leveraged by the controllers. As it stands, this is dead code and should either be used or removed.
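To illustrate how a controller could leverage the patcher as the reviewer suggests, here is a minimal sketch; the helper function, reason string, and import paths are hypothetical, not code from this PR:

```go
package controllers

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

	// Import paths assume the upstream module layout.
	sriovnetworkv1 "github.com/k8snetworkplumbingwg/sriov-network-operator/api/v1"
	"github.com/k8snetworkplumbingwg/sriov-network-operator/pkg/status"
)

// markReady is a hypothetical helper showing one way to drive the Patcher.
func markReady(ctx context.Context, p status.Interface, obj *sriovnetworkv1.SriovNetworkNodeState) error {
	// Snapshot conditions before mutation so event transitions can be detected.
	old := append([]metav1.Condition{}, obj.Status.Conditions...)

	apply := func() error {
		p.SetCondition(&obj.Status.Conditions, "Ready", metav1.ConditionTrue,
			"ConfigurationSucceeded", "all interfaces configured", obj.Generation)
		return nil
	}
	// Apply once up front so the "new" conditions passed below reflect the
	// desired state; the patcher re-runs apply on each conflict retry after
	// re-fetching the object.
	if err := apply(); err != nil {
		return err
	}
	return p.UpdateStatusWithEvents(ctx, obj, old, obj.Status.Conditions, apply)
}
```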
Pull Request Test Coverage Report for Build 20987704551
💛 - Coveralls
Signed-off-by: Sebastian Sch <sebassch@gmail.com>
Add foundational condition support for SR-IOV Network Operator CRDs:

- Define standard condition types (Ready, Progressing, Degraded)
- Define drain-specific condition types (DrainProgressing, DrainDegraded, DrainComplete)
- Add DrainState enum for tracking drain operation states
- Add common condition reasons for various states
- Add NetworkStatus type with Conditions field for network CRDs
- Add ConditionsEqual helper for comparing conditions ignoring LastTransitionTime
- Add SetConfigurationConditions and SetDrainConditions methods
- Add comprehensive unit tests for condition handling

This follows the Kubernetes API conventions for status conditions and provides the foundation for observability improvements across all SR-IOV CRDs.

Signed-off-by: Sebastian Sch <sebassch@gmail.com>
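As a rough sketch of the declarations this commit describes; the constant identifiers and DrainState values below are assumptions, only the condition type strings are confirmed by the PR:

```go
package v1

const (
	// Standard condition types shared across the SR-IOV CRDs.
	ConditionReady       = "Ready"
	ConditionProgressing = "Progressing"
	ConditionDegraded    = "Degraded"

	// Drain-specific condition types on SriovNetworkNodeState.
	ConditionDrainProgressing = "DrainProgressing"
	ConditionDrainDegraded    = "DrainDegraded"
	ConditionDrainComplete    = "DrainComplete"
)

// DrainState tracks the state of a drain operation; the concrete values here
// are illustrative.
type DrainState string

const (
	DrainStateIdle     DrainState = "Idle"
	DrainStateDraining DrainState = "Draining"
	DrainStateComplete DrainState = "Complete"
	DrainStateFailed   DrainState = "Failed"
)
```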
Add a new status package with utilities for managing CRD status updates:

Patcher (patcher.go):
- Provides retry logic for status updates with conflict handling
- Supports both Update and Patch operations
- Includes UpdateStatusWithEvents for automatic event emission
- Embeds condition management methods for convenience
- Uses an interface for easy mocking in tests

Transitions (transitions.go):
- DetectTransitions compares old and new conditions
- Returns structured Transition objects for each change
- EventType() method returns the appropriate event type (Normal/Warning)
- EventReason() generates suitable event reason strings
- Supports Added, Changed, Removed, and Unchanged transitions

Tests:
- Comprehensive unit tests for patcher operations
- Tests for transition detection logic
- Tests for event type and reason generation

This package provides a reusable foundation for consistent status handling across all controllers in the operator.

Signed-off-by: Sebastian Sch <sebassch@gmail.com>
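A minimal sketch of the transition types this commit describes. The Type and Message fields plus the EventType()/EventReason() methods are implied by their use in patcher.go; the remaining fields and the Warning heuristic are assumptions:

```go
package status

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// TransitionType classifies how a condition changed between two snapshots.
type TransitionType string

const (
	TransitionAdded     TransitionType = "Added"
	TransitionChanged   TransitionType = "Changed"
	TransitionRemoved   TransitionType = "Removed"
	TransitionUnchanged TransitionType = "Unchanged"
)

// Transition captures a single condition change; the exact field set in the
// PR may differ.
type Transition struct {
	Type          TransitionType
	ConditionType string
	NewStatus     metav1.ConditionStatus
	Message       string
}

// EventType maps a transition to a Kubernetes event type: here a Degraded
// condition turning True is surfaced as a Warning, everything else as Normal.
func (t Transition) EventType() string {
	if t.ConditionType == "Degraded" && t.NewStatus == metav1.ConditionTrue {
		return corev1.EventTypeWarning
	}
	return corev1.EventTypeNormal
}
```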
Update SriovNetworkNodeState status type to include Kubernetes conditions:

- Add Conditions field to status
- Add printcolumns for Ready, Progressing, Degraded
- Add printcolumns for drain conditions (DrainProgress, DrainDegraded, DrainComplete)

Conditions follow Kubernetes conventions with patchMergeKey and listType annotations for proper strategic merge patch support.

Signed-off-by: Sebastian Sch <sebassch@gmail.com>
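The printcolumns might be declared with standard kubebuilder markers along these lines (illustrative, not the PR's exact markers; the JSONPath filter pattern is the conventional way to surface a condition's status):

```go
// +kubebuilder:printcolumn:name="Ready",type="string",JSONPath=`.status.conditions[?(@.type=="Ready")].status`
// +kubebuilder:printcolumn:name="Progressing",type="string",JSONPath=`.status.conditions[?(@.type=="Progressing")].status`
// +kubebuilder:printcolumn:name="Degraded",type="string",JSONPath=`.status.conditions[?(@.type=="Degraded")].status`
// +kubebuilder:printcolumn:name="DrainComplete",type="string",JSONPath=`.status.conditions[?(@.type=="DrainComplete")].status`
```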
Update config daemon status handling to set configuration conditions:

- Call SetConfigurationConditions when updating node state sync status
- Preserve drain conditions during configuration status updates
- Use ConditionsEqual to avoid unnecessary API calls when unchanged
- Set Ready=True, Progressing=False, Degraded=False on success
- Set Progressing=True, Ready=False on in-progress configuration
- Set Degraded=True with error message on configuration failure
- Keep Degraded=True while retrying after previous failure

This ensures the daemon properly reports configuration progress and errors through standard Kubernetes conditions, enabling better observability and automation.

Signed-off-by: Sebastian Sch <sebassch@gmail.com>
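A minimal sketch of this mapping, assuming the operator's existing SyncStatus strings ("InProgress", "Succeeded", "Failed"); the reason constants are illustrative and the real logic lives in SetConfigurationConditions:

```go
package daemon

import (
	"k8s.io/apimachinery/pkg/api/meta"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// setConfigurationConditions sketches the sync-state to condition mapping.
func setConfigurationConditions(conds *[]metav1.Condition, syncStatus, lastSyncError string, gen int64) {
	set := func(t string, s metav1.ConditionStatus, reason, msg string) {
		meta.SetStatusCondition(conds, metav1.Condition{
			Type: t, Status: s, Reason: reason, Message: msg, ObservedGeneration: gen,
		})
	}
	switch syncStatus {
	case "Succeeded":
		set("Ready", metav1.ConditionTrue, "ConfigurationSucceeded", "")
		set("Progressing", metav1.ConditionFalse, "ConfigurationSucceeded", "")
		set("Degraded", metav1.ConditionFalse, "ConfigurationSucceeded", "")
	case "InProgress":
		set("Ready", metav1.ConditionFalse, "ConfigurationInProgress", "")
		set("Progressing", metav1.ConditionTrue, "ConfigurationInProgress", "")
	case "Failed":
		// Degraded carries the error message and stays True while retrying.
		set("Ready", metav1.ConditionFalse, "ConfigurationFailed", lastSyncError)
		set("Progressing", metav1.ConditionFalse, "ConfigurationFailed", lastSyncError)
		set("Degraded", metav1.ConditionTrue, "ConfigurationFailed", lastSyncError)
	}
}
```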
Add comprehensive drain condition management to the drain controller:

Drain Controller Changes:
- Add updateDrainConditions helper to set drain-specific conditions
- Set DrainProgressing=True when drain starts
- Set DrainDegraded=True when drain errors occur (e.g., PDB violations)
- Set DrainComplete=True when drain completes successfully
- Reset all drain conditions to idle state when returning to idle
- Use Patch instead of Update to avoid race conditions with the daemon

Drainer Changes:
- Add DrainErrorCallback type for immediate error notification
- Call the callback when eviction errors occur (before timeout)
- Capture the first real error (e.g., PDB violation) instead of a generic timeout
- Pass the callback through cordon and drain operations

Controller Helper Fix:
- Fix DrainStateAnnotationPredicate to use GetAnnotations() not GetLabels()

Tests:
- Add tests for DrainComplete=True after successful drain
- Add tests for DrainDegraded=False after successful drain
- Add tests for idle state conditions
- Add tests for observedGeneration consistency
- Add tests for single node reboot drain conditions
- Add tests for error callback invocation

This enables users to observe drain progress and errors through standard Kubernetes conditions, improving troubleshooting and enabling automated responses to drain failures.

Signed-off-by: Sebastian Sch <sebassch@gmail.com>
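The callback type plausibly looks like this; its exact signature is an assumption based on the description above:

```go
// DrainErrorCallback delivers the first concrete eviction error (for example
// a PDB violation) to the drain controller before the overall drain timeout
// fires. Shape assumed, not confirmed by the PR diff shown here.
type DrainErrorCallback func(err error)
```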
Update the kubernetes-conditions-integration.md design document to reflect the actual implementation details:

- Add SriovNetworkNodePolicy as a supported CRD
- Add drain-specific conditions (DrainProgressing, DrainDegraded, DrainComplete)
- Document all condition reasons with actual constant names
- Add NetworkStatus shared type used by network CRDs
- Document the pkg/status package with the Patcher interface and transition detection
- Add ConditionsEqual helper function documentation
- Update API extension examples with actual struct definitions
- Add matchedNodeCount/readyNodeCount to Policy and PoolConfig status
- Add kubectl output examples showing new columns
- Update the implementation commits list
- Mark document status as 'implemented'

Signed-off-by: Sebastian Sch <sebassch@gmail.com>
- Add SetConfigurationConditions and SetDrainConditions methods to helper.go
- Add findCondition helper function to suite_test.go
- Regenerate CRDs and deepcopy for the SriovNetworkNodeState type

Signed-off-by: Sebastian Sch <sebassch@gmail.com>
The optimization to skip unnecessary API calls in updateSyncState was only checking SyncStatus, LastSyncError, and Conditions. This caused updates to Status.Interfaces, Status.Bridges, and Status.System to be skipped when those fields changed but the other fields remained the same.

This broke ExternallyManaged policy validation: when VFs were manually configured on the host, the daemon would discover the new NumVfs value but skip updating the SriovNetworkNodeState status. The webhook would then see a stale NumVfs=0 and reject the policy.

Add equality checks for the Interfaces, Bridges, and System fields to ensure host status changes are properly persisted to the API.

Signed-off-by: Cursor AI <noreply@cursor.com>
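A sketch of the extended skip check under stated assumptions (semantic deep-equality for the struct fields, import path per the upstream layout; the commit's actual comparison may differ):

```go
package daemon

import (
	apiequality "k8s.io/apimachinery/pkg/api/equality"

	sriovnetworkv1 "github.com/k8snetworkplumbingwg/sriov-network-operator/api/v1"
)

// statusUnchanged extends the original check (SyncStatus, LastSyncError,
// Conditions) with the host-discovery fields so interface, bridge, and
// system changes are no longer silently dropped.
func statusUnchanged(oldSt, newSt *sriovnetworkv1.SriovNetworkNodeStateStatus) bool {
	return oldSt.SyncStatus == newSt.SyncStatus &&
		oldSt.LastSyncError == newSt.LastSyncError &&
		sriovnetworkv1.ConditionsEqual(oldSt.Conditions, newSt.Conditions) &&
		apiequality.Semantic.DeepEqual(oldSt.Interfaces, newSt.Interfaces) &&
		apiequality.Semantic.DeepEqual(oldSt.Bridges, newSt.Bridges) &&
		apiequality.Semantic.DeepEqual(oldSt.System, newSt.System)
}
```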
…oolConfig status

Update policy and pool config status types to include Kubernetes conditions:

SriovNetworkNodePolicy:
- Add Conditions field to status
- Add MatchedNodeCount and ReadyNodeCount fields
- Add printcolumns for Matched, Ready Nodes, and conditions

SriovNetworkPoolConfig:
- Add Conditions field to status
- Add MatchedNodeCount and ReadyNodeCount fields
- Add printcolumns for Matched, Ready Nodes, and conditions

All conditions follow Kubernetes conventions with patchMergeKey and listType annotations for proper strategic merge patch support.

Signed-off-by: Sebastian Sch <sebassch@gmail.com>
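The extended status block might look roughly like this (shown for the policy; the pool config mirrors it, and field types are assumptions):

```go
// SriovNetworkNodePolicyStatus sketches the new status shape.
type SriovNetworkNodePolicyStatus struct {
	// Conditions reports the aggregated state of the matched nodes.
	// +patchMergeKey=type
	// +patchStrategy=merge
	// +listType=map
	// +listMapKey=type
	Conditions []metav1.Condition `json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type"`

	// MatchedNodeCount is the number of nodes selected by this policy.
	MatchedNodeCount int `json:"matchedNodeCount,omitempty"`
	// ReadyNodeCount is the number of matched nodes reporting Ready=True.
	ReadyNodeCount int `json:"readyNodeCount,omitempty"`
}
```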
…tion

Add dedicated status reconcilers for aggregating node conditions:

SriovNetworkNodePolicyStatusReconciler:
- Watches SriovNetworkNodePolicy, SriovNetworkNodeState, and Node objects
- Aggregates conditions from matching nodes based on nodeSelector
- Updates MatchedNodeCount and ReadyNodeCount in status
- Sets Ready=True only when ALL matched nodes are ready
- Sets Progressing=True when ANY node is progressing
- Sets Degraded=True when ANY node is degraded
- Sets the NoMatchingNodes reason when the selector matches nothing

SriovNetworkPoolConfigStatusReconciler:
- Similar aggregation logic for SriovNetworkPoolConfig
- Supports a nil nodeSelector (matches all nodes)
- Skips OVS hardware offload configs (different purpose)
- Uses a label selector for node matching

Shared Helper Functions:
- aggregateNodeConditions: collects condition counts from NodeStates
- buildStatusConditions: creates conditions from aggregated counts
- isConditionTrue: helper for checking condition status

Controller Registration:
- Register both status controllers in main.go
- Add a findCondition helper to suite_test.go for test assertions

This enables users to see at a glance whether their policies and pool configs are fully applied across all matching nodes.

Signed-off-by: Sebastian Sch <sebassch@gmail.com>
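A sketch of the aggregation rules described above; apart from NoMatchingNodes, the reason strings here are illustrative, and the counts would come from a helper like aggregateNodeConditions walking the matched SriovNetworkNodeStates:

```go
package controllers

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/meta"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// buildStatusConditions: Ready only when every matched node is ready,
// Progressing/Degraded when any node reports them, NoMatchingNodes otherwise.
func buildStatusConditions(matched, ready, progressing, degraded int, gen int64) []metav1.Condition {
	var conds []metav1.Condition
	set := func(t string, s metav1.ConditionStatus, reason, msg string) {
		meta.SetStatusCondition(&conds, metav1.Condition{
			Type: t, Status: s, Reason: reason, Message: msg, ObservedGeneration: gen,
		})
	}

	if matched == 0 {
		set("Ready", metav1.ConditionFalse, "NoMatchingNodes", "node selector matches no nodes")
		return conds
	}

	if ready == matched {
		set("Ready", metav1.ConditionTrue, "AllNodesReady", fmt.Sprintf("%d/%d nodes ready", ready, matched))
	} else {
		set("Ready", metav1.ConditionFalse, "NodesNotReady", fmt.Sprintf("%d/%d nodes ready", ready, matched))
	}
	if progressing > 0 {
		set("Progressing", metav1.ConditionTrue, "NodesProgressing", fmt.Sprintf("%d node(s) progressing", progressing))
	} else {
		set("Progressing", metav1.ConditionFalse, "AllNodesReady", "no nodes progressing")
	}
	if degraded > 0 {
		set("Degraded", metav1.ConditionTrue, "NodesDegraded", fmt.Sprintf("%d node(s) degraded", degraded))
	} else {
		set("Degraded", metav1.ConditionFalse, "AllNodesReady", "no nodes degraded")
	}
	return conds
}
```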
…d PoolConfig

Add condition assertions to conformance tests:

Fixtures:
- Add an assertCondition helper for verifying condition status
- Support SriovNetworkPoolConfig and SriovNetworkNodePolicy

Network Pool Tests (test_networkpool.go):
- Verify PoolConfig Ready=True after RDMA mode configuration
- Verify PoolConfig Progressing=False after configuration completes
- Verify MatchedNodeCount is updated correctly
- Add a test for NoMatchingNodes when the selector matches nothing

Policy Configuration Tests (test_policy_configuration.go):
- Verify Policy Ready=True after VF configuration
- Verify Policy Progressing=False after configuration completes
- Add condition checks for MTU and partition policies
- Add condition checks for VFIO and netdevice policies

These tests ensure conditions are correctly set during real cluster operations and validate the end-to-end observability improvements.

Signed-off-by: Sebastian Sch <sebassch@gmail.com>
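A conformance assertion built on the new fixture might read like this; assertCondition's real signature may differ, and the return-error convention is assumed:

```go
// Inside a Ginkgo It(...): poll until the aggregated status settles.
Eventually(func(g Gomega) {
	// Hypothetical fixture usage: assumes assertCondition returns an error.
	g.Expect(assertCondition(poolConfig, "Ready", metav1.ConditionTrue)).To(Succeed())
	g.Expect(assertCondition(poolConfig, "Progressing", metav1.ConditionFalse)).To(Succeed())
}, 5*time.Minute, 10*time.Second).Should(Succeed())
```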
- Remove StatusPatcher from the network controllers (not part of this branch)
- Remove StatusPatcher from SriovOperatorConfig (not part of this branch)
- Remove the SriovNetwork case from the assertCondition helper (SriovNetwork conditions are in a different branch)
- Regenerate CRDs

Signed-off-by: Sebastian Sch <sebassch@gmail.com>
Add cleanup logic to remove any existing SriovOperatorConfig before creating a new one in the SriovNetworkNodePolicy controller tests.

This fixes a flaky test failure where other test suites may leave a 'default' config behind that conflicts with the SriovNetworkNodePolicy tests when they run with different random seeds.

The cleanup logic mirrors what's in sriovoperatorconfig_controller_test.go:
- Remove finalizers if present
- Delete the object if not already deleted
- Wait for the object to be fully gone before creating a new one

Signed-off-by: Sebastian Sch <sebassch@gmail.com>
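Sketched against a typical envtest suite; k8sClient and testNamespace are assumed suite variables, and this fragment would sit inside a BeforeEach:

```go
// Remove any leftover 'default' SriovOperatorConfig before creating a new one.
config := &sriovnetworkv1.SriovOperatorConfig{}
key := types.NamespacedName{Name: "default", Namespace: testNamespace}
if err := k8sClient.Get(ctx, key, config); err == nil {
	// Drop finalizers so deletion is not blocked.
	if len(config.Finalizers) > 0 {
		config.Finalizers = nil
		Expect(k8sClient.Update(ctx, config)).To(Succeed())
	}
	// Delete the object if it is not already being deleted.
	_ = k8sClient.Delete(ctx, config)
	// Wait until it is fully gone before creating a fresh one.
	Eventually(func() bool {
		return apierrors.IsNotFound(k8sClient.Get(ctx, key, &sriovnetworkv1.SriovOperatorConfig{}))
	}, time.Minute, time.Second).Should(BeTrue())
}
```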
Signed-off-by: Sebastian Sch <sebassch@gmail.com>
Force-pushed from 4434008 to dcc3b46
No description provided.