
Remove Node viewing ClusterRole from E2E-on-OCP workflow #312

Closed
MikeSpreitzer wants to merge 1 commit into llm-d-incubation:main from MikeSpreitzer:ocp-sans-node-view

Conversation

@MikeSpreitzer
Collaborator

Modify the GHA workflow that runs the E2E test in our shared OpenShift cluster so that it does not create, delete, or use a ClusterRole for reading Nodes.

This is, at the moment, a test; not to be merged as is.


Signed-off-by: Mike Spreitzer <[email protected]>
Copilot AI review requested due to automatic review settings March 2, 2026 16:23
@MikeSpreitzer
Collaborator Author

Oops, wrong fork.

Contributor

Copilot AI left a comment


Pull request overview

This PR updates the OpenShift E2E GitHub Actions workflow to stop creating/deleting a dedicated ClusterRole for Node read access, and to stop passing that ClusterRole name into the Helm install.

Changes:

  • Removed workflow steps that create/delete the fma-node-viewer ClusterRole.
  • Removed the Helm --set global.nodeViewClusterRole=... override so the chart no longer creates the associated ClusterRoleBinding.
  • Removed cluster-scoped cleanup commands related to the node-view RBAC resources.
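For reference, the removed fma-node-viewer ClusterRole would look roughly like the following. This is a hypothetical reconstruction: only the resource name and the get/list/watch-on-Nodes permission are stated in this PR; everything else is the conventional shape of such a manifest.

```yaml
# Hypothetical reconstruction of the removed fma-node-viewer ClusterRole.
# Name and verbs come from this PR's discussion; the rest is boilerplate.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: fma-node-viewer
rules:
  - apiGroups: [""]          # Nodes are in the core API group
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
```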
Comments suppressed due to low confidence (1)

.github/workflows/ci-e2e-openshift.yaml:498

  • Removing the global.nodeViewClusterRole Helm value means the fma-controllers chart will no longer create the ClusterRoleBinding that grants the controller ServiceAccount get/list/watch on Nodes (see chart templates/rbac/role-binding.yaml conditional on that value). The dual-pods controller registers a Nodes informer/lister and uses it during reconciliation (e.g., it Get()s the Node and returns early when it is missing), so in a default install this will prevent launcher/providing Pods from being created and will likely break this workflow’s E2E assertions. If the goal is to avoid creating/deleting ClusterRoles in CI, consider instead binding to an existing, pre-provisioned ClusterRole that grants minimal Node read access, or update the controller/chart so Node access is not required for these E2E paths.
          helm upgrade --install "$FMA_RELEASE_NAME" charts/fma-controllers \
            -n "$FMA_NAMESPACE" \
            --set global.imageRegistry="${CONTROLLER_IMAGE%/dual-pods-controller:*}" \
            --set global.imageTag="${CONTROLLER_IMAGE##*:}" \
            --set dualPodsController.sleeperLimit=2 \
            --set global.local=false \
            --set dualPodsController.debugAcceleratorMemory=false \
            --set launcherPopulator.enabled=false
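The reviewer's suggested alternative (binding the controller's ServiceAccount to an existing, pre-provisioned ClusterRole instead of creating one in CI) could be sketched as below. The ClusterRole, ServiceAccount, and namespace names here are placeholders, not values taken from this repository.

```yaml
# Sketch only: bind the controller ServiceAccount to a pre-existing
# ClusterRole rather than creating/deleting one in the workflow.
# "node-reader", "fma-controller", and "fma-system" are assumed names.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: fma-controller-node-view
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: node-reader          # assumed to be pre-provisioned by cluster admins
subjects:
  - kind: ServiceAccount
    name: fma-controller     # assumed controller ServiceAccount name
    namespace: fma-system    # assumed; the workflow uses $FMA_NAMESPACE
```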

