Commit db0c221

Adjust Node viewing ClusterRole in E2E-on-OCP workflow (#313)
* Adjust Node viewing ClusterRole in E2E-on-OCP workflow

  Modify the GHA workflow that runs E2E tests in our shared OpenShift
  cluster so that the name of the ClusterRole includes
  `$FMA_RELEASE_NAME`, which in turn includes the workflow run ID.

  Document the solution for reading Node objects.

  Also a bit of tidying up from other recent cluster-sharing PRs.

  Signed-off-by: Mike Spreitzer <[email protected]>

* Added clarification about ClusterRole name

  Signed-off-by: Mike Spreitzer <[email protected]>

---------

Signed-off-by: Mike Spreitzer <[email protected]>
1 parent 7eb291a commit db0c221

2 files changed

Lines changed: 27 additions & 15 deletions

.github/workflows/ci-e2e-openshift.yaml

Lines changed: 8 additions & 13 deletions
```diff
@@ -395,8 +395,8 @@ jobs:
 
   # Clean up cluster-scoped resources from previous runs
   echo "Cleaning up cluster-scoped resources..."
-  kubectl delete clusterrolebinding -l app.kubernetes.io/name=fma-controllers --ignore-not-found 2>/dev/null || true
-  kubectl delete clusterrole fma-node-viewer --ignore-not-found || true
+  kubectl delete clusterrole "${FMA_RELEASE_NAME}-node-view" --ignore-not-found || true
+  kubectl delete clusterrolebinding "${FMA_RELEASE_NAME}-node-view" --ignore-not-found || true
 
   echo "Cleanup complete"
 
```
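The deletes in the hunk above follow an idempotent pattern worth noting: `--ignore-not-found` tolerates absent objects and `|| true` swallows any remaining failure, so a rerun on a shared cluster never aborts at the cleanup step. A minimal sketch of that pattern (the function name and the echo are illustrative, not from the workflow; the real kubectl calls are shown as comments so the sketch runs without a cluster):

```shell
#!/bin/sh
# Illustrative sketch of the workflow's idempotent-cleanup pattern.
# The real workflow calls kubectl; the calls are commented out here so
# the shape of the pattern is runnable on its own.
cleanup_node_view() {
  # kubectl delete clusterrole "$1" --ignore-not-found || true
  # kubectl delete clusterrolebinding "$1" --ignore-not-found || true
  echo "cleaned up $1"
}

cleanup_node_view "fma-example-node-view"
```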
```diff
@@ -462,8 +462,8 @@ jobs:
 
   - name: Create node-viewer ClusterRole
     run: |
-      echo "Creating node-viewer ClusterRole..."
-      kubectl create clusterrole fma-node-viewer --verb=get,list,watch --resource=nodes
+      echo "Creating ClusterRole ${FMA_RELEASE_NAME}-node-view..."
+      kubectl create clusterrole ${FMA_RELEASE_NAME}-node-view --verb=get,list,watch --resource=nodes
       echo "ClusterRole created"
 
   - name: Detect ValidatingAdmissionPolicy support
```
```diff
@@ -499,7 +499,7 @@ jobs:
         -n "$FMA_NAMESPACE" \
         --set global.imageRegistry="${CONTROLLER_IMAGE%/dual-pods-controller:*}" \
         --set global.imageTag="${CONTROLLER_IMAGE##*:}" \
-        --set global.nodeViewClusterRole=fma-node-viewer \
+        --set global.nodeViewClusterRole=${FMA_RELEASE_NAME}-node-view \
         --set dualPodsController.sleeperLimit=2 \
         --set global.local=false \
         --set dualPodsController.debugAcceleratorMemory=false \
```
```diff
@@ -916,14 +916,9 @@ jobs:
   kubectl delete namespace "$FMA_NAMESPACE" \
     --ignore-not-found --timeout=120s || true
 
-  # Delete CRDs
-  # TODO: Implement safe CRD lifecycle management for tests (e.g., handle shared clusters,
-  # concurrent test runs, and version upgrades/downgrades) before enabling CRD deletion.
-  # kubectl delete -f config/crd/ --ignore-not-found || true
-
-  # Delete cluster-scoped resources
-  kubectl delete clusterrole fma-node-viewer --ignore-not-found || true
-  kubectl delete clusterrolebinding "$FMA_RELEASE_NAME-node-view" --ignore-not-found || true
+  # Delete cluster-scoped stuff for reading Node objects
+  kubectl delete clusterrole "${FMA_RELEASE_NAME}-node-view" --ignore-not-found || true
+  kubectl delete clusterrolebinding "${FMA_RELEASE_NAME}-node-view" --ignore-not-found || true
 
   echo "Cleanup complete"
 
```
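The point of the rename running through these hunks is collision avoidance: because `$FMA_RELEASE_NAME` embeds the workflow run ID, concurrent runs on the shared cluster each get a distinct ClusterRole/ClusterRoleBinding pair. A sketch of the naming scheme (the `fma-` prefix and the local fallback are illustrative assumptions, not taken from the workflow; `GITHUB_RUN_ID` is a variable GitHub Actions sets for each run):

```shell
#!/bin/sh
# Illustrative: derive a per-run name so concurrent runs cannot collide.
# GITHUB_RUN_ID is provided by GitHub Actions; the fallback covers local runs.
FMA_RELEASE_NAME="fma-${GITHUB_RUN_ID:-local}"
NODE_VIEW_NAME="${FMA_RELEASE_NAME}-node-view"
echo "ClusterRole/ClusterRoleBinding name: ${NODE_VIEW_NAME}"
```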

docs/cluster-sharing.md

Lines changed: 19 additions & 2 deletions
```diff
@@ -23,8 +23,6 @@ object.
 - A ClusterRoleBinding that binds the node-reading ClusterRole to an
   FMA ServiceAccount.
 
-- A ClusterRoleBinding that binds ClusterRole `view` to an FMA ServiceAccount.
-
 - A Namespace that FMA is installed in.
 
 ## Solution for the CustomResourceDefinition Objects
```
```diff
@@ -138,3 +136,22 @@ object.
   ValidatingAdmissionPolicy[Binding] objects.
 
 - The Helm chart does nothing about these policy objects.
+
+## Solution for reading Node objects
+
+- The Helm chart can optionally create a ClusterRoleBinding for a
+  ClusterRole with a given name.
+
+- The Helm chart does nothing about creating the ClusterRole for
+  reading Node objects.
+
+- The admin of a shared cluster has several choices about what to
+  maintain on behalf of users vs. authorize users to do.
+
+- For the GHA workflow that does E2E test in a particular shared
+  OpenShift cluster, the solution is as follows. The workflow creates,
+  uses, and deletes a ClusterRole that has the workflow run ID in its
+  name. The name of this ClusterRole is given to the Helm chart, which
+  makes a ClusterRoleBinding for it. The workflow deletes the Helm
+  chart instance and then, for good measure, also deletes the
+  ClusterRoleBinding.
```
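The create/use/delete lifecycle in that last bullet can be sketched end to end. The kubectl and helm invocations are shown as comments because they need a live cluster, and the `helm install` flags other than `global.nodeViewClusterRole` are elided; the `fma-` prefix and local fallback are illustrative assumptions:

```shell
#!/bin/sh
# Sketch of the workflow's lifecycle for Node-reading RBAC, assuming
# FMA_RELEASE_NAME embeds the run ID (as the workflow change arranges).
set -eu
FMA_RELEASE_NAME="fma-${GITHUB_RUN_ID:-local}"
ROLE="${FMA_RELEASE_NAME}-node-view"

# 1. Create the run-scoped ClusterRole:
#      kubectl create clusterrole "$ROLE" --verb=get,list,watch --resource=nodes
# 2. Hand its name to the Helm chart, which makes a ClusterRoleBinding for it:
#      helm install "$FMA_RELEASE_NAME" ... --set global.nodeViewClusterRole="$ROLE"
# 3. Tear down: delete the chart instance, then both cluster-scoped objects:
#      helm uninstall "$FMA_RELEASE_NAME"
#      kubectl delete clusterrole "$ROLE" --ignore-not-found || true
#      kubectl delete clusterrolebinding "$ROLE" --ignore-not-found || true
echo "$ROLE"
```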
