@Yu-Jack Yu-Jack commented Jan 26, 2026

Problem:

We only support configuring the filter and auto-provision lists through pod env variables, which isn't easy to use.

Solution:

I added a config map structure and the following features:

  1. Added a device filter, which we didn't have before.

  2. Load the config map to filter block devices and select candidates for auto-provision.

    1. Added creation of a default config map resource under pkg/data. (reason)

    2. Apply the exclude filters and auto-provision list every time the scanner starts, so a changed config map takes effect on the next scan. I don't think we need to notify the scanner to rescan when the config map changes; one-way propagation is enough.

  3. Fallback mechanism. If the config map doesn't exist, the env variables are used directly. This guards against a failed upgrade, which could otherwise cause unexpected behavior.

  4. Added debug messages like the following. We have some default settings, so it's better to have a clear message to check them.

    level=debug msg="Final filter configuration (including defaults):"
    level=debug msg="  Exclude Filters (6 total):"
    level=debug msg="    [0] driver type filter"
    level=debug msg="        Disk: exclude: non-HDD/SSD drives"
    level=debug msg="        Part: exclude: non-HDD/SSD drives"
    level=debug msg="    [1] device path filter"
    level=debug msg="        Disk: device paths: [/dev/sdd]"
    level=debug msg="        Part: device paths: [/dev/sdd]"
    level=debug msg="    [2] vendor filter"
    level=debug msg="        Disk: vendors: [longhorn, thisisaexample, QEMU, harvester1, longhorn, SPDK bdev Controller]"
    level=debug msg="        Part: N/A"
    level=debug msg="    [3] path filter"
    level=debug msg="        Disk: mount paths: [/]"
    level=debug msg="        Part: mount paths: [/]"
    level=debug msg="    [4] label filter"
    level=debug msg="        Disk: fs labels: [COS_*,  HARV_*]"
    level=debug msg="        Part: fs labels: [COS_*,  HARV_*]"
    level=debug msg="    [5] parttype filter"
    level=debug msg="        Disk: part types: [21686148-6449-6E6F-744E-656564454649]"
    level=debug msg="        Part: part types: [21686148-6449-6E6F-744E-656564454649]"
    level=debug msg="  Auto-Provision Filters (1 total):"
    level=debug msg="    [0] device path filter"
    level=debug msg="        Disk: device paths: [/dev/sdz]"
    level=debug msg="        Part: device paths: [/dev/sdz]"

p.s. I didn't change much logic inside the scanner. I'd like to keep it simple, so I just reused it and changed the interface to accept the filters from the outside.

Out of scope

  1. Auto-provision for Longhorn v2 and LVM. This PR doesn't include that and only focuses on the filter list. However, to support it in the future, the config map still accepts different provisioner types and params.

Related Issue:
harvester/harvester#5059

Test plan:

This is one example test. There are too many cases to list here; I'll cover as many as possible in our unit tests.

apiVersion: v1
kind: ConfigMap
metadata:
  name: harvester-node-disk-manager
  namespace: harvester-system
data:
  autoprovision.yaml: |
    - hostname: "*"
      devices:
        - "/dev/sdc"
        - "/dev/sdd"
  filters.yaml: |
    - hostname: "*"
      excludeLabels: ["COS_*, HARV_*"]
      excludeVendors: ["longhorn", "thisisaexample"]
      excludeDevices: ["/dev/sdd"]
    - hostname: "harvester1"
      excludeVendors: ["harvester1"]
    - hostname: "harvester2"
      excludeVendors: ["harvester2"]

harvester1 node

  • excludeLabels: ["COS_*, HARV_*"]
  • excludeVendors: ["longhorn", "thisisaexample", "harvester1"]
  • excludeDevices: ["/dev/sdd"]
  • autoprovision: ["/dev/sdc", "/dev/sdd"]

harvester2 node

  • excludeLabels: ["COS_*, HARV_*"]
  • excludeVendors: ["longhorn", "thisisaexample", "harvester2"]
  • excludeDevices: ["/dev/sdd"]
  • autoprovision: ["/dev/sdc", "/dev/sdd"]

In the end, /dev/sdd is skipped because of the exclusion, even though it's in the auto-provision list.
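The per-node merge and the exclusion-wins precedence above can be sketched in Go. This is a hypothetical illustration, not the PR's actual code; NodeFilter, mergeVendorsForHost, and autoProvisionCandidates are made-up names:

```go
package main

import "fmt"

// NodeFilter is a stand-in for one filters.yaml entry.
type NodeFilter struct {
	Hostname       string
	ExcludeVendors []string
}

// mergeVendorsForHost collects vendors from wildcard ("*") entries plus the
// entries matching the node's hostname, mirroring the per-node lists above.
func mergeVendorsForHost(entries []NodeFilter, host string) []string {
	var vendors []string
	for _, e := range entries {
		if e.Hostname == "*" || e.Hostname == host {
			vendors = append(vendors, e.ExcludeVendors...)
		}
	}
	return vendors
}

// autoProvisionCandidates drops any auto-provision device that is also
// excluded: exclusion wins, which is why /dev/sdd is skipped above.
func autoProvisionCandidates(autoprovision, excludeDevices []string) []string {
	excluded := make(map[string]bool, len(excludeDevices))
	for _, d := range excludeDevices {
		excluded[d] = true
	}
	var out []string
	for _, d := range autoprovision {
		if !excluded[d] {
			out = append(out, d)
		}
	}
	return out
}

func main() {
	entries := []NodeFilter{
		{Hostname: "*", ExcludeVendors: []string{"longhorn", "thisisaexample"}},
		{Hostname: "harvester1", ExcludeVendors: []string{"harvester1"}},
		{Hostname: "harvester2", ExcludeVendors: []string{"harvester2"}},
	}
	fmt.Println(mergeVendorsForHost(entries, "harvester1")) // [longhorn thisisaexample harvester1]
	fmt.Println(autoProvisionCandidates([]string{"/dev/sdc", "/dev/sdd"}, []string{"/dev/sdd"})) // [/dev/sdc]
}
```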

@Yu-Jack Yu-Jack self-assigned this Jan 26, 2026
@Yu-Jack Yu-Jack changed the title from Harv 5059 to feat: support filter list Jan 26, 2026
@Yu-Jack Yu-Jack marked this pull request as ready for review January 27, 2026 08:09
@Yu-Jack Yu-Jack requested review from a team, Vicente-Cheng and tserong as code owners January 27, 2026 08:09
Copilot AI review requested due to automatic review settings January 27, 2026 08:09

Copilot AI left a comment


Pull request overview

Adds ConfigMap-backed filter/autoprovision configuration (with fallback to env vars) and exposes filter “Details” for debug logging, enabling easier per-node disk selection without relying solely on pod env configuration.

Changes:

  • Introduces ConfigMapLoader to load/merge filters.yaml and autoprovision.yaml configs (with hostname matching).
  • Updates scanner to reload config on each scan and emit detailed debug output for active filters.
  • Adds Helm chart ConfigMap template/values and unit tests for loader behavior.

Reviewed changes

Copilot reviewed 16 out of 303 changed files in this pull request and generated 7 comments.

  • pkg/utils/fake/configmap.go: Adds a fake ConfigMap client adapter for tests.
  • pkg/filter/vendor_filter.go: Adds Details() output for vendor filters.
  • pkg/filter/path_filter.go: Adds Details() output for mount path filters.
  • pkg/filter/part_type_filter.go: Adds Details() output for partition type filters.
  • pkg/filter/label_filter.go: Adds Details() output for label filters.
  • pkg/filter/filter.go: Extends filter interfaces with Details() and adds device-path exclusion support.
  • pkg/filter/drive_type_filter.go: Adds Details() output for drive type filters.
  • pkg/filter/device_path_filter.go: Adds Details() output for device path filters.
  • pkg/filter/configmap_loader.go: Implements ConfigMap loading/parsing/merge logic for filters and autoprovision.
  • pkg/filter/configmap_loader_test.go: Adds unit tests for ConfigMapLoader behavior across multiple scenarios.
  • pkg/controller/blockdevice/scanner.go: Reloads config each scan and logs final filter configuration; threads ctx through scan.
  • pkg/controller/blockdevice/controller.go: Updates scanner start call to pass context.
  • cmd/node-disk-manager/main.go: Wires ConfigMapLoader into scanner creation and removes direct env-filter wiring.
  • deploy/charts/harvester-node-disk-manager/values.yaml: Adds chart values structure for ConfigMap-backed configuration.
  • deploy/charts/harvester-node-disk-manager/templates/configmap.yaml: Adds chart template to render filters/autoprovision data into a ConfigMap.
  • go.mod: Adds direct dependency on gopkg.in/yaml.v3 for YAML parsing.


Comment on lines +17 to +37
    func addNDMConfigMap(clientset *kubernetes.Clientset) error {
        configMap := &corev1.ConfigMap{
            ObjectMeta: metav1.ObjectMeta{
                Name:      ConfigMapName,
                Namespace: ConfigMapNamespace,
            },
            Data: map[string]string{
                "filters.yaml": `- hostname: "*"
      excludeLabels: ["COS_*", "HARV_*"]
    `,
                "autoprovision.yaml": "",
            },
        }

        _, err := clientset.CoreV1().ConfigMaps(ConfigMapNamespace).Create(context.TODO(), configMap, metav1.CreateOptions{})
        if err != nil && !apierrors.IsAlreadyExists(err) {
            return err
        }

        return nil
    }
Contributor Author


The reason I put it here is the managedChart issue (harvester/harvester#9522 (comment)).
