config: Split KubeMacPool monitoring manifests into separate YAML#598

Merged
kubevirt-bot merged 2 commits into k8snetworkplumbingwg:main from RamLavi:split_monitoring_yamls
Feb 11, 2026

Conversation

@RamLavi
Member

@RamLavi RamLavi commented Feb 10, 2026

What this PR does / why we need it:
Right now kubemacpool deploys objects that require Prometheus to be deployed on the cluster.
This should not be a tight dependency.
This PR splits the monitoring-related manifests so that Prometheus-operator resources can be opted into independently of the core installation.

This does not change the current deployment logic; it just separates the manifests so that users can deploy kubemacpool on clusters where Prometheus is not installed.

Special notes for your reviewer:

Release note:

NONE

@gemini-code-assist

Summary of Changes

Hello @RamLavi, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request enhances the flexibility of KubeMacPool deployments by separating its core Kubernetes manifests from those specifically designed for Prometheus monitoring. This architectural change allows users to install KubeMacPool without a hard dependency on Prometheus-operator, catering to environments where Prometheus might not be installed or is managed differently. The modification primarily involves reorganizing existing manifest files and updating build scripts to reflect this new modular structure, ensuring that the core functionality of KubeMacPool remains unaffected while improving deployment options.

Highlights

  • Decoupled Monitoring Manifests: The core KubeMacPool deployment manifests have been separated from the Prometheus monitoring-related manifests. This allows users to deploy KubeMacPool without requiring Prometheus-operator resources to be present on the cluster.
  • Makefile Updates: The Makefile has been updated to generate and apply kubemacpool-monitoring.yaml alongside the main kubemacpool.yaml for deploy, deploy-test, generate-deploy, generate-test, and generate-external targets.
  • Manifest Refactoring: Prometheus-related Kubernetes resources (Role, RoleBinding, Service, PrometheusRule, ServiceMonitor) were extracted from config/release/kubemacpool.yaml and config/test/kubemacpool.yaml.
  • New Monitoring Kustomization: A new config/monitoring directory was introduced, and its kustomization.yaml now explicitly defines the kubemacpool-system namespace and kubemacpool- name prefix for monitoring resources.
  • Documentation Update: The README.md was updated with a note explaining the split of monitoring manifests and advising to apply them only when Prometheus Operator CRDs are available.
Changelog
  • Makefile
    • Modified the deploy, deploy-test, generate-deploy, generate-test, and generate-external targets to generate and apply separate monitoring manifests.
  • README.md
    • Added a note explaining the separation of core and monitoring manifests and the condition for applying monitoring resources.
  • config/default/kustomization.yaml
    • Removed the direct inclusion of the ../monitoring path, as monitoring manifests are now handled as a separate deployment unit.
  • config/monitoring/kustomization.yaml
    • Added namespace and namePrefix fields to the kustomization file for monitoring resources.
  • config/release/kubemacpool.yaml
    • Removed Prometheus-related Role, RoleBinding, Service (metrics service), PrometheusRule, and ServiceMonitor definitions.
  • config/test/kubemacpool.yaml
    • Removed Prometheus-related Role, RoleBinding, Service (metrics service), PrometheusRule, and ServiceMonitor definitions.
Activity
  • The pull request was opened by RamLavi with a clear description of the intent to split monitoring manifests for independent deployment.
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page, here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

Feature Command Description
Code Review /gemini review Performs a code review for the current pull request in its current state.
Pull Request Summary /gemini summary Provides a summary of the current pull request in its current state.
Comment @gemini-code-assist Responds in comments when explicitly tagged, both in pull request comments and review comments.
Help /gemini help Displays a list of available commands.

Customization

To customize Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counter productive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving your feedback about your experience with Gemini Code Assist for Github and other Google products, sign up here.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request successfully separates the monitoring-related Kubernetes manifests from the core kubemacpool.yaml file. This is a good change that allows users to optionally install the Prometheus-operator resources. The changes in the Makefile to generate and deploy the new kubemacpool-monitoring.yaml are correct. I've also noticed that the deploy target now correctly uses the release manifest instead of the test manifest, which is a nice fix. I have one suggestion for the Makefile to avoid generating the same monitoring manifest multiple times.

Comment on lines +79 to +90
$(KUSTOMIZE) build config/monitoring > config/release/kubemacpool-monitoring.yaml

generate-test: $(GO) manifests
$(KUSTOMIZE) build config/test > config/test/kubemacpool.yaml
$(KUSTOMIZE) build config/monitoring > config/test/kubemacpool-monitoring.yaml

generate-external: $(GO) manifests
cp -r config/test config/external
cd config/external && \
$(KUSTOMIZE) edit set image quay.io/kubevirt/kubemacpool=$(REGISTRY)/$(IMG)
$(KUSTOMIZE) build config/external > config/external/kubemacpool.yaml
$(KUSTOMIZE) build config/monitoring > config/external/kubemacpool-monitoring.yaml


medium

The command $(KUSTOMIZE) build config/monitoring is used in generate-deploy, generate-test, and generate-external to create monitoring manifests. Since config/monitoring is a standalone kustomize base, the generated manifest will be identical in all cases. This leads to unnecessary file duplication and repeated work during the build process.

Consider generating the monitoring manifest only once to a single location (e.g., config/kubemacpool-monitoring.yaml), and then have the deploy and deploy-test targets reference that single file. This would make the build process more efficient and the Makefile easier to maintain.
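One way to realize this suggestion, sketched in plain shell with the tool invocations stubbed (the single output path `config/kubemacpool-monitoring.yaml` is the reviewer's hypothetical, not the PR's current layout):

```shell
# Sketch of the reviewer's idea: build the monitoring manifest once, then have
# every deploy-like target reference the same file. Tool calls are stubbed as
# printf so the control flow runs anywhere; a real Makefile would invoke
# $(KUSTOMIZE) and $(KUBECTL) instead.
MONITORING_OUT=config/kubemacpool-monitoring.yaml  # hypothetical single location

generate_monitoring() {
  # real: $(KUSTOMIZE) build config/monitoring > "$MONITORING_OUT"
  printf 'built %s once\n' "$MONITORING_OUT"
}

deploy_monitoring() {
  # real: $(KUBECTL) apply -f "$MONITORING_OUT"
  printf 'applied %s\n' "$MONITORING_OUT"
}

generate_monitoring   # run once
deploy_monitoring     # deploy target
deploy_monitoring     # deploy-test target reuses the same file
```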

@RamLavi
Member Author

RamLavi commented Feb 10, 2026

/retest

@RamLavi RamLavi force-pushed the split_monitoring_yamls branch 3 times, most recently from 72f92b1 to a0573db Compare February 11, 2026 08:43
@RamLavi
Member Author

RamLavi commented Feb 11, 2026

Change: deploy kmp monitoring objects only when CRDs installed
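A gate like the one this change describes could look roughly as follows (a hedged sketch, not the PR's actual code): apply the monitoring manifest only if the Prometheus Operator CRDs are registered in the cluster.

```shell
# Sketch: deploy monitoring objects only when the Prometheus Operator CRDs
# are installed. Succeeds only if the ServiceMonitor CRD exists.
has_prometheus_crds() {
  kubectl get crd servicemonitors.monitoring.coreos.com >/dev/null 2>&1
}

if has_prometheus_crds; then
  kubectl apply -f config/release/kubemacpool-monitoring.yaml
else
  echo "Prometheus Operator CRDs not found; skipping monitoring manifests"
fi
```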

README.md Outdated

**Note:** For VirtualMachines, Kubemacpool supports primary interface MAC address allocation for [masquerade](https://kubevirt.io/user-guide/virtual_machines/interfaces_and_networks/#masquerade) binding mechanism.

**Monitoring manifests:** The release manifests are split into `kubemacpool.yaml` (core) and `kubemacpool-monitoring.yaml` (Prometheus Operator resources like `ServiceMonitor`/`PrometheusRule`). Apply the monitoring manifest only when those CRDs are available.
Collaborator


nit:
I would reword this section to describe it as the project supporting Prometheus integration, and say that to integrate with Prometheus, install the mentioned manifests.

Member Author


DONE. PTAL

Makefile Outdated
Comment on lines 70 to 71
$(KUBECTL) apply -f config/release/kubemacpool.yaml
$(KUBECTL) apply -f config/release/kubemacpool-monitoring.yaml
Collaborator


Previously, deploy used the manifest in config/test; why do we use the one from release now?
Don't we need a Prometheus presence check here as well?

Member Author


ahh good catch!
Actually, due to the changes we made on KMP, we might not need the separate test config, but we should do that in a separate context.

Right now kubemacpool deploys objects that require prometheus to be
deployed on the cluster.
This should not be a tight dependency.
This commit splits the monitoring-related manifests so that Prometheus-operator
resources can be opted into independently of the core installation.

This does not change the current deployment logic; it just separates the
manifests so that users can deploy kubemacpool on clusters where
prometheus is not installed.

Assisted-by: Claude Sonnet 4.6 <noreply@anthropic.com>
Signed-off-by: Ram Lavi <ralavi@redhat.com>
@RamLavi RamLavi force-pushed the split_monitoring_yamls branch from a0573db to 1ad32f9 Compare February 11, 2026 10:11
@RamLavi
Member Author

RamLavi commented Feb 11, 2026

Change: address @ormergi 's review

@ormergi
Collaborator

ormergi commented Feb 11, 2026

/lgtm

@RamLavi
Member Author

RamLavi commented Feb 11, 2026

/approve

@kubevirt-bot
Collaborator

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: RamLavi

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Details: Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@kubevirt-bot kubevirt-bot merged commit cf11f30 into k8snetworkplumbingwg:main Feb 11, 2026
5 checks passed
RamLavi added a commit to kubevirt/cluster-network-addons-operator that referenced this pull request Feb 11, 2026
Upstream kubemacpool added monitoring infrastructure [0][1][2].
- Adding the new objects, with configurable params that will be
rendered at runtime by CNAO.
- Wrapping these objects with another param, MonitoringAvailable, that will
also be rendered at runtime, so these objects will be deployed only
when prometheus is installed on the cluster.

[0] k8snetworkplumbingwg/kubemacpool#596
[1] k8snetworkplumbingwg/kubemacpool#587
[2] k8snetworkplumbingwg/kubemacpool#598

Assisted-by: Claude Sonnet 4.6 <noreply@anthropic.com>
Signed-off-by: Ram Lavi <ralavi@redhat.com>
kubevirt-bot added a commit to kubevirt/cluster-network-addons-operator that referenced this pull request Feb 11, 2026
* bump kubemacpool to v0.50.0-18-gcf11f30

Signed-off-by: CNAO Bump Bot <noreply@github.com>

* e2e/kubemacpool: Set monitoring lane env var

Doing so tells CNAO to configure the monitoring components using the
correct prometheus ns

Signed-off-by: Ram Lavi <ralavi@redhat.com>

* components/kubemacpool: Add monitoring objects

Upstream kubemacpool added monitoring infrastructure [0][1][2].
- Adding the new objects, with configurable params that will be
rendered at runtime by CNAO.
- Wrapping these objects with another param, MonitoringAvailable, that will
also be rendered at runtime, so these objects will be deployed only
when prometheus is installed on the cluster.

[0] k8snetworkplumbingwg/kubemacpool#596
[1] k8snetworkplumbingwg/kubemacpool#587
[2] k8snetworkplumbingwg/kubemacpool#598

Assisted-by: Claude Sonnet 4.6 <noreply@anthropic.com>
Signed-off-by: Ram Lavi <ralavi@redhat.com>

* components/kubemacpool: Templatize monitoring params for SA

Upstream kubemacpool added monitoring infrastructure [0]
with a hardcoded prometheus-k8s service account and monitoring namespace
in the RoleBinding subjects.
For CNAO, these need to be configurable via template variables,
consistent with how CNAO already handles it.

[0] k8snetworkplumbingwg/kubemacpool#596

Assisted-by: Claude Sonnet 4.6 <noreply@anthropic.com>
Signed-off-by: Ram Lavi <ralavi@redhat.com>

---------

Signed-off-by: CNAO Bump Bot <noreply@github.com>
Signed-off-by: Ram Lavi <ralavi@redhat.com>
Co-authored-by: CNAO Bump Bot <noreply@github.com>
Co-authored-by: Ram Lavi <ralavi@redhat.com>