
[backport]: manifest servicemonitor cleanup + label selector for controller#44

Merged
zdtsw merged 3 commits into opendatahub-io:main from zdtsw-forking:jira_5614
Apr 2, 2026

Conversation

@zdtsw
Member

@zdtsw zdtsw commented Mar 27, 2026

Description

backport from red-hat-data-services#64

How Has This Been Tested?

Merge criteria:

  • The commits are squashed in a cohesive manner and have meaningful messages.
  • Testing instructions have been added in the PR body (for PRs involving changes that are not immediately obvious).
  • The developer has manually tested the changes and verified that they work.

Summary by CodeRabbit

  • Chores
    • Updated ServiceMonitor label configuration for improved metrics targeting.
    • Refined controller kustomization configuration.

zdtsw added 2 commits March 27, 2026 20:40
Signed-off-by: Wen Zhou <wenzhou@redhat.com>
(cherry picked from commit f81c34a)
Signed-off-by: Wen Zhou <wenzhou@redhat.com>
(cherry picked from commit ff81652)
@coderabbitai

coderabbitai Bot commented Mar 27, 2026

📝 Walkthrough

Walkthrough

Removed a replacements entry from the kustomization configuration that dynamically copied the controller-manager Deployment's namespace into the ServiceMonitor's spec.namespaceSelector.matchNames field. Simultaneously, updated the ServiceMonitor's label selector to require an additional label match: app.kubernetes.io/name: workload-variant-autoscaler, alongside the existing control-plane: controller-manager requirement. A trailing newline was added to the ServiceMonitor file.
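For reference, the updated selector described in the walkthrough would look roughly like this (a sketch based on the labels named above, not the literal contents of config/prometheus/servicemonitor.yaml; the metadata name is illustrative):

```yaml
# Sketch of the ServiceMonitor selector after this PR, per the walkthrough.
# The metadata.name below is an assumption for illustration only.
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: controller-manager-metrics-monitor
spec:
  selector:
    matchLabels:
      # Existing requirement, now joined by an app name label:
      control-plane: controller-manager
      app.kubernetes.io/name: workload-variant-autoscaler
```

Requiring both labels narrows scraping to the intended controller-manager Service, which is what replaces the removed kustomize replacements entry.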

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~12 minutes

🚥 Pre-merge checks | ✅ 2
✅ Passed checks (2 passed)
  • Description Check: ✅ Passed. Check skipped; CodeRabbit's high-level summary is enabled.
  • Title check: ✅ Passed. The title accurately describes the main changes: removing a replacement configuration and updating ServiceMonitor label selectors for the controller-manager.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


Comment @coderabbitai help to get the list of available commands and usage tips.


@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 2

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@config/prometheus/servicemonitor.yaml`:
- Around line 14-17: The ServiceMonitor currently scrapes over plaintext
(scheme: http) and uses port: http which mismatches the target Service
advertising https/8443; update the ServiceMonitor spec to use scheme: https and
the correct port name or number (match the Service's port name/8443 from
config/default/metrics_service.yaml), and add proper TLS settings (tls_config
with certificate authority or set insecureSkipVerify: false) so scraping uses
TLS; ensure the port key (port: <port-name-or-8443>) and scheme: https are the
only changes so Prometheus can successfully and securely scrape the target.
- Around line 9-12: The ServiceMonitor currently only has selector.matchLabels
and is in namespace workload-variant-autoscaler-monitoring while the metrics
Service lives in workload-variant-autoscaler-system, so add a
spec.namespaceSelector with matchNames containing
"workload-variant-autoscaler-system" to allow cross-namespace target discovery;
update the ServiceMonitor resource (the spec.namespaceSelector.matchNames entry)
so the selector.matchLabels (control-plane: controller-manager and
app.kubernetes.io/name: workload-variant-autoscaler) can find the Service in the
other namespace.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Central YAML (base), Organization UI (inherited)

Review profile: CHILL

Plan: Pro

Run ID: 1878cfa3-b670-4529-8a6f-9240a1bf1ef8

📥 Commits

Reviewing files that changed from the base of the PR and between 06862a9 and 7bb6e17.

📒 Files selected for processing (2)
  • config/default/kustomization.yaml
  • config/prometheus/servicemonitor.yaml
💤 Files with no reviewable changes (1)
  • config/default/kustomization.yaml

Comment on lines 9 to +12
  selector:
    matchLabels:
      control-plane: controller-manager
      app.kubernetes.io/name: workload-variant-autoscaler

⚠️ Potential issue | 🟠 Major

Restore cross-namespace target selection (namespaceSelector)

metadata.namespace is workload-variant-autoscaler-monitoring (Line 5), while the metrics Service is in workload-variant-autoscaler-system (config/default/metrics_service.yaml:1-18). Without spec.namespaceSelector.matchNames, this ServiceMonitor won’t discover any targets.

Suggested fix
 spec:
+  namespaceSelector:
+    matchNames:
+      - workload-variant-autoscaler-system
   selector:
     matchLabels:
       control-plane: controller-manager
       app.kubernetes.io/name: workload-variant-autoscaler
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change (before → after):

selector:
  matchLabels:
    control-plane: controller-manager
    app.kubernetes.io/name: workload-variant-autoscaler

→

spec:
  namespaceSelector:
    matchNames:
      - workload-variant-autoscaler-system
  selector:
    matchLabels:
      control-plane: controller-manager
      app.kubernetes.io/name: workload-variant-autoscaler
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@config/prometheus/servicemonitor.yaml` around lines 9 - 12, The
ServiceMonitor currently only has selector.matchLabels and is in namespace
workload-variant-autoscaler-monitoring while the metrics Service lives in
workload-variant-autoscaler-system, so add a spec.namespaceSelector with
matchNames containing "workload-variant-autoscaler-system" to allow
cross-namespace target discovery; update the ServiceMonitor resource (the
spec.namespaceSelector.matchNames entry) so the selector.matchLabels
(control-plane: controller-manager and app.kubernetes.io/name:
workload-variant-autoscaler) can find the Service in the other namespace.

Comment on lines 14 to +17
- port: http
  scheme: http
  interval: 30s
  path: /metrics
(The only change on the last line is the added trailing newline.)

⚠️ Potential issue | 🟠 Major

Major security/correctness gap: plaintext endpoint config (CWE-319) and likely port mismatch

Severity: Major. Using scheme: http on Line 15 allows cleartext metrics transport (CWE-319). In-cluster traffic inspection on compromised nodes can expose telemetry data. Also, the target metrics Service advertises https/8443 (config/default/metrics_service.yaml:1-18), so port: http may fail scraping.

Suggested fix
   endpoints:
-  - port: http
-    scheme: http
+  - port: https
+    scheme: https
     interval: 30s
-    path: /metrics 
+    path: /metrics
+    tlsConfig:
+      insecureSkipVerify: true
+    bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token

As per coding guidelines: REVIEW PRIORITIES: 1. Security vulnerabilities (provide severity, exploit scenario, and remediation code).

📝 Committable suggestion


Suggested change (before → after):

- port: http
  scheme: http
  interval: 30s
  path: /metrics
(no newline at end of file)

→

- port: https
  scheme: https
  interval: 30s
  path: /metrics
  tlsConfig:
    insecureSkipVerify: true
  bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@config/prometheus/servicemonitor.yaml` around lines 14 - 17, The
ServiceMonitor currently scrapes over plaintext (scheme: http) and uses port:
http which mismatches the target Service advertising https/8443; update the
ServiceMonitor spec to use scheme: https and the correct port name or number
(match the Service's port name/8443 from config/default/metrics_service.yaml),
and add proper TLS settings (tls_config with certificate authority or set
insecureSkipVerify: false) so scraping uses TLS; ensure the port key (port:
<port-name-or-8443>) and scheme: https are the only changes so Prometheus can
successfully and securely scrape the target.
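Taken together, the two review suggestions above would yield a ServiceMonitor roughly like the following. This is a sketch only: the namespace names and bearer-token path come from the review comments, the metadata name is illustrative, and insecureSkipVerify: true mirrors the reviewer's suggested fix rather than a hardened production setting (a proper tlsConfig with a CA would be stricter):

```yaml
# Sketch combining both suggested fixes; field values assumed from the
# review comments above, metadata.name is illustrative.
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: controller-manager-metrics-monitor
  namespace: workload-variant-autoscaler-monitoring
spec:
  # Fix 1: discover the metrics Service in its own namespace.
  namespaceSelector:
    matchNames:
      - workload-variant-autoscaler-system
  selector:
    matchLabels:
      control-plane: controller-manager
      app.kubernetes.io/name: workload-variant-autoscaler
  # Fix 2: scrape over TLS on the https port the Service advertises.
  endpoints:
    - port: https
      scheme: https
      interval: 30s
      path: /metrics
      tlsConfig:
        insecureSkipVerify: true
      bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
```

With both changes, Prometheus can find the Service across namespaces and scrape it over TLS instead of failing against the https/8443 endpoint.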

@zdtsw zdtsw requested a review from anishasthana March 29, 2026 15:44
Member

@Gregory-Pereira Gregory-Pereira left a comment


/lgtm

Signed-off-by: Wen Zhou <wenzhou@redhat.com>
@zdtsw zdtsw merged commit 242b310 into opendatahub-io:main Apr 2, 2026
3 checks passed
