Add configurable clusterCIDR and serviceCIDR for HostedCluster networking #69
tsorya wants to merge 1 commit into rh-ecosystem-edge:main
Conversation
Allow users to override the default pod CIDR (10.132.0.0/14) and service CIDR (172.31.0.0/16) via optional, immutable fields on the DPFHCPProvisioner spec. Both fields include CEL validation to ensure valid CIDR format and immutability after creation.

Signed-off-by: Igal Tsoiref <itsoiref@redhat.com>
Made-with: Cursor
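The CEL validation described above could be expressed with kubebuilder markers roughly like this. This is a sketch only: the field names match the PR description, but the exact marker text, messages, and surrounding type layout are assumptions, not the PR's actual diff.

```go
package main

import "fmt"

// DPFHCPProvisionerSpec sketch: just the two new fields, with the kind of
// CEL markers the PR describes (isCIDR format check plus a self == oldSelf
// immutability rule). Rule and message strings are assumptions.
type DPFHCPProvisionerSpec struct {
	// +kubebuilder:validation:XValidation:rule="isCIDR(self)",message="clusterCIDR must be a valid CIDR block"
	// +kubebuilder:validation:XValidation:rule="self == oldSelf",message="clusterCIDR is immutable"
	// +optional
	ClusterCIDR string `json:"clusterCIDR,omitempty"`

	// +kubebuilder:validation:XValidation:rule="isCIDR(self)",message="serviceCIDR must be a valid CIDR block"
	// +kubebuilder:validation:XValidation:rule="self == oldSelf",message="serviceCIDR is immutable"
	// +optional
	ServiceCIDR string `json:"serviceCIDR,omitempty"`
}

func main() {
	// Unset fields stay empty, so the controller falls back to defaults.
	spec := DPFHCPProvisionerSpec{ClusterCIDR: "10.200.0.0/14"}
	fmt.Println(spec.ClusterCIDR) // 10.200.0.0/14
}
```

Note that `isCIDR` is part of the Kubernetes CEL IP/CIDR library, and `self == oldSelf` only rejects in-place changes; making the field truly immutable after creation also depends on how unset-to-set transitions are handled.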
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: tsorya

The full list of commands accepted by this bot can be found here. The pull request process is described here.

Details: needs approval from an approver in each of these files.
Approvers can indicate their approval by writing `/approve` in a comment.
Walkthrough: Added two optional, immutable CIDR fields (`ClusterCIDR`, `ServiceCIDR`) to the DPFHCPProvisioner spec.

Changes
Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~12 minutes
🚥 Pre-merge checks: ✅ 3 passed
Actionable comments posted: 2
🧹 Nitpick comments (1)
internal/controller/hostedcluster/hostedcluster.go (1)
244-261: Consider error-handling CIDR parsing for defense-in-depth.

At lines 257 and 260, `ipnet.MustParseCIDR()` relies on CRD validation (the `isCIDR` CEL rule) to prevent malformed input. While this validation gates the API server, using error-handling parsing in the reconcile path is a more resilient pattern.

Suggested hardening (non-panicking parse):

```diff
 import (
 	"context"
 	"fmt"
+	"net"
@@
 func buildNetworking(cr *provisioningv1alpha1.DPFHCPProvisioner) hyperv1.ClusterNetworking {
 	clusterCIDR := DefaultClusterCIDR
-	if cr.Spec.ClusterCIDR != "" {
+	if cr.Spec.ClusterCIDR != "" {
+		if _, _, err := net.ParseCIDR(cr.Spec.ClusterCIDR); err == nil {
 			clusterCIDR = cr.Spec.ClusterCIDR
+		}
 	}
@@
 	serviceCIDR := DefaultServiceCIDR
-	if cr.Spec.ServiceCIDR != "" {
+	if cr.Spec.ServiceCIDR != "" {
+		if _, _, err := net.ParseCIDR(cr.Spec.ServiceCIDR); err == nil {
 			serviceCIDR = cr.Spec.ServiceCIDR
+		}
 	}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/controller/hostedcluster/hostedcluster.go` around lines 244 - 261, The code uses ipnet.MustParseCIDR for serviceCIDR and clusterCIDR which can panic; replace these calls with the non-panicking parser (e.g., ipnet.ParseCIDR or equivalent) and handle parse errors inside the reconcile path: parse serviceCIDR and clusterCIDR into local variables, if parsing fails log/process the error and return a controlled reconcile error (or set a Cluster condition) instead of allowing a panic. Update the construction of hyperv1.ClusterNetworking (the ServiceNetwork and ClusterNetwork entries) to use the successfully parsed net.IPNet values and reference clusterCIDR/serviceCIDR parsing failures in the returned error from the function containing this logic.
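For illustration, the fallback-to-default behavior with non-panicking parsing can be sketched standalone. The constant names mirror the defaults in the PR description, and the silent-fallback choice is an assumption; the actual fix might instead return a reconcile error, as the prompt above suggests.

```go
package main

import (
	"fmt"
	"net"
)

// Defaults from the PR description; the constant names are assumptions.
const (
	DefaultClusterCIDR = "10.132.0.0/14"
	DefaultServiceCIDR = "172.31.0.0/16"
)

// pickCIDR returns override when it is a well-formed CIDR, otherwise the
// fallback. This is the non-panicking pattern the review suggests instead
// of ipnet.MustParseCIDR.
func pickCIDR(override, fallback string) string {
	if override == "" {
		return fallback
	}
	if _, _, err := net.ParseCIDR(override); err != nil {
		return fallback
	}
	return override
}

func main() {
	fmt.Println(pickCIDR("", DefaultClusterCIDR))              // 10.132.0.0/14
	fmt.Println(pickCIDR("10.200.0.0/16", DefaultClusterCIDR)) // 10.200.0.0/16
	fmt.Println(pickCIDR("not-a-cidr", DefaultServiceCIDR))    // 172.31.0.0/16
}
```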
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 7d60e96b-f596-4cfa-a6b0-6de7289e8812
📒 Files selected for processing (5)
- api/v1alpha1/dpfhcpprovisioner_types.go
- config/crd/bases/provisioning.dpu.hcp.io_dpfhcpprovisioners.yaml
- internal/controller/dpfhcpprovisioner_crd_test.go
- internal/controller/hostedcluster/hostedcluster.go
- internal/controller/hostedcluster/hostedcluster_test.go
```go
for _, cidr := range invalidCIDRs {
	provisioner := &provisioningv1alpha1.DPFHCPProvisioner{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "invalid-cluster-cidr",
			Namespace: "default",
		},
		Spec: provisioningv1alpha1.DPFHCPProvisionerSpec{
			DPUClusterRef: provisioningv1alpha1.DPUClusterReference{
				Name:      "test-dpu",
				Namespace: "default",
			},
			BaseDomain:                     "test.example.com",
			OCPReleaseImage:                "quay.io/openshift-release-dev/ocp-release:4.19.0-ec.5-multi",
			SSHKeySecretRef:                corev1.LocalObjectReference{Name: "test-ssh-key"},
			PullSecretRef:                  corev1.LocalObjectReference{Name: "test-pull-secret"},
			ControlPlaneAvailabilityPolicy: hyperv1.SingleReplica,
			ClusterCIDR:                    cidr,
		},
```
Invalid CIDR tests are currently false positives.
Both invalid-case specs are missing the required dpuDeploymentRef, so validation can fail before the CIDR rules are evaluated (the blocks at lines 462 and 530). This makes these tests non-diagnostic for the CIDR rules.
✅ Suggested test fix

```diff
 Spec: provisioningv1alpha1.DPFHCPProvisionerSpec{
 	DPUClusterRef: provisioningv1alpha1.DPUClusterReference{
 		Name:      "test-dpu",
 		Namespace: "default",
 	},
+	DPUDeploymentRef: &provisioningv1alpha1.DPUDeploymentReference{
+		Name:      "test-dpu-deployment",
+		Namespace: "default",
+	},
 	BaseDomain: "test.example.com",
@@
 err := k8sClient.Create(ctx, provisioner)
 Expect(err).To(HaveOccurred(), "ClusterCIDR %q should be invalid", cidr)
+Expect(err.Error()).To(ContainSubstring("clusterCIDR must be a valid CIDR block"))
@@
 Spec: provisioningv1alpha1.DPFHCPProvisionerSpec{
 	DPUClusterRef: provisioningv1alpha1.DPUClusterReference{
 		Name:      "test-dpu",
 		Namespace: "default",
 	},
+	DPUDeploymentRef: &provisioningv1alpha1.DPUDeploymentReference{
+		Name:      "test-dpu-deployment",
+		Namespace: "default",
+	},
 	BaseDomain: "test.example.com",
@@
 err := k8sClient.Create(ctx, provisioner)
 Expect(err).To(HaveOccurred(), "ServiceCIDR %q should be invalid", cidr)
+Expect(err.Error()).To(ContainSubstring("serviceCIDR must be a valid CIDR block"))
```

Also applies to: 524-541
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@internal/controller/dpfhcpprovisioner_crd_test.go` around lines 456 - 473,
The invalid-CIDR test cases are false positives because the DPFHCPProvisioner
specs in the invalidCIDRs loop (and the similar block at 524-541) omit the
required dpuDeploymentRef/DPUDeploymentRef, causing validation to fail before
CIDR checks; update each failing test spec (the DPFHCPProvisioner instances
created inside the invalidCIDRs loop and the other invalid-case block) to
include a valid DPUDeploymentRef (or dpuDeploymentRef) LocalObjectReference
(Name and Namespace, e.g., matching the test DPU cluster) so the test exercises
CIDR validation rather than failing on the missing deployment reference.
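For reference, the messages the suggested assertions check would come from the CRD schema. A sketch of what controller-gen might emit for these fields follows; this is an assumption, not the actual generated content of config/crd/bases:

```yaml
# Hypothetical x-kubernetes-validations for the two new fields.
clusterCIDR:
  type: string
  x-kubernetes-validations:
    - rule: isCIDR(self)
      message: clusterCIDR must be a valid CIDR block
    - rule: self == oldSelf
      message: clusterCIDR is immutable
serviceCIDR:
  type: string
  x-kubernetes-validations:
    - rule: isCIDR(self)
      message: serviceCIDR must be a valid CIDR block
    - rule: self == oldSelf
      message: serviceCIDR is immutable
```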
```go
Eventually(func() error {
	fresh := &provisioningv1alpha1.DPFHCPProvisioner{}
	if err := k8sClient.Get(ctx, types.NamespacedName{Name: "immutability-test", Namespace: "default"}, fresh); err != nil {
		return err
	}
	updated := fresh.DeepCopy()
	updated.Spec.ClusterCIDR = "10.128.0.0/14"
	return k8sClient.Update(ctx, updated)
}, time.Second*5, time.Millisecond*100).Should(Succeed())
```
Immutability expectation does not match “immutable after creation”.
These blocks currently expect setting clusterCIDR/serviceCIDR after initial create to succeed, which conflicts with the stated immutability-after-create contract.
✏️ Suggested assertion update

```diff
-}, time.Second*5, time.Millisecond*100).Should(Succeed())
+}, time.Second*5, time.Millisecond*100).Should(MatchError(ContainSubstring("clusterCIDR is immutable")))
@@
-}, time.Second*5, time.Millisecond*100).Should(Succeed())
+}, time.Second*5, time.Millisecond*100).Should(MatchError(ContainSubstring("serviceCIDR is immutable")))
```

Also applies to: 777-785
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@internal/controller/dpfhcpprovisioner_crd_test.go` around lines 753 - 761,
The immutability test currently tries to update
DPFHCPProvisioner.Spec.ClusterCIDR (and similarly ServiceCIDR) after creation
and asserts success; change these test blocks to assert that the update is
rejected instead: fetch the resource (DPFHCPProvisioner), modify ClusterCIDR
(and in the other block ServiceCIDR) on the DeepCopy, call k8sClient.Update(ctx,
updated) and assert that it returns an error (e.g., ShouldNot(Succeed()) or
Should(MatchError(...)) as appropriate) to reflect the “immutable after
creation” contract enforced by the controller.
Add configurable clusterCIDR and serviceCIDR for HostedCluster networking
Allow users to override the default pod CIDR (10.132.0.0/14) and service CIDR (172.31.0.0/16) via optional, immutable fields on the DPFHCPProvisioner spec. Both fields include CEL validation to ensure valid CIDR format and immutability after creation.
Made-with: Cursor
Summary by CodeRabbit
New Features
- Added `ClusterCIDR` and `ServiceCIDR` fields to provisioner specifications for custom pod and service IP allocation.

Tests