Description
Is there an existing issue for this?
- I have searched the existing issues
Current Behavior
We are deploying the Keycloak operator and Keycloak to Rancher with Fleet, and Fleet never moves the bundle/resource state to Ready, because the operator's resources carry the label app.kubernetes.io/managed-by=quarkus while Fleet expects to set it to helm.
The bundle always stays in the Modified state on the cluster:
Modified(1) [Cluster fleet-default/test-kiaf]; deployment.apps test-kiaf-keycloak/keycloak-operator modified {"metadata":{"labels":{"app.kubernetes.io/managed-by":"quarkus"}}}; service.v1 test-kiaf-keycloak/keycloak-operator modified {"metadata":{"labels":{"app.kubernetes.io/managed-by":"quarkus"}}}; serviceaccount.v1 test-kiaf-keycloak/keycloak-operator modified {"metadata":{"labels":{"app.kubernetes.io/managed-by":"quarkus"}}}
The operator pod has the following labels:
Name: keycloak-operator-5d6bb9dc49-jlj8r
Namespace: test-kiaf-keycloak
Priority: 0
Service Account: keycloak-operator
Node: test-kiafworker03/192.168.238.115
Start Time: Tue, 25 Nov 2025 15:46:59 +0100
Labels: app.kubernetes.io/managed-by=quarkus
app.kubernetes.io/name=keycloak-operator
app.kubernetes.io/version=26.2.5
pod-template-hash=5d6bb9dc49
Annotations: app.quarkus.io/build-timestamp: 2025-05-28 - 06:54:27 +0000
app.quarkus.io/quarkus-version: 3.20.1
app.quarkus.io/vcs-uri: https://github.com/keycloak/keycloak.git
cni.projectcalico.org/containerID: fc18b53ec55e6cdc24f527e88e0f3d711b13166dd28bd7940b57c06a150ebd7b
cni.projectcalico.org/podIP: 172.18.42.158/32
cni.projectcalico.org/podIPs: 172.18.42.158/32
Status: Running
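
A possible workaround (untested here, sketched from Fleet's diff/comparePatches feature) is to tell Fleet to ignore the quarkus-set managed-by label when computing the diff, via fleet.yaml. Note that the "/" inside the label key must be escaped as "~1" per JSON Pointer syntax:

```yaml
# fleet.yaml (sketch): ignore the operator's managed-by label when diffing.
# Repeat the entry for the Service and ServiceAccount if they are flagged too.
diff:
  comparePatches:
  - apiVersion: apps/v1
    kind: Deployment
    namespace: test-kiaf-keycloak
    name: keycloak-operator
    operations:
    # "/" in the label key is escaped as "~1" (JSON Pointer, RFC 6901)
    - {"op": "remove", "path": "/metadata/labels/app.kubernetes.io~1managed-by"}
```

This only suppresses the diff on Fleet's side; it does not change the label the operator deployment actually carries.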
Expected Behavior
The bundle should go to Ready like all the others.
Steps To Reproduce
Deploy the Keycloak operator (https://www.keycloak.org/operator/installation) with Fleet to Rancher RKE2 downstream clusters via manifest.
Environment
- Architecture: amd64
- RancherVersion: v2.12.2
- Fleet Version: v0.13.2
- Cluster:
- Provider: RKE2
- Options: 3 nodes (3 master 3 worker), one external haproxy lb, HPE CSI - on direct FC
- Kubernetes Version: v1.32.6+rke2r1
Logs
Anything else?
No response