Bug: mirror pod (whose ownerReferences.kind = Node) will fail to have correct workloadScanReport #938

@williamshen9999

Description

Is there an existing issue for this?

  • I have searched the existing issues

Troubleshooting logs

  • I reviewed the troubleshooting logs and confirmed they contain no sensitive information.

Current Behavior

For a mirror pod (whose ownerReferences.kind = Node), after a workload scan runs:

  • In its Image resource, the status is one of the following:
status: {}
status:
  workloadScanReports:
  - name: node-4d004fc8-3d8d-4680-92b3-930b773d51a9
    namespace: kube-system
  • Its workloadScanReport resource is named one of the following:
(No workloadscanreport)
node-4d004fc8-3d8d-4680-92b3-930b773d51a9

Expected Behavior

According to the supported workload types, mirror pods should be supported by the workload scan.

So, after running a workload scan, it should:

  • In its Image resource, have a status like:
status:
  workloadScanReports:
  - name: pod-xxxxxx
    namespace: kube-system
  • Have its workloadScanReport resource named like:
pod-xxxxx
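The observed names suggest the report is named `<lowercased owner kind>-<owner UID>`: the buggy report `node-4d004fc8-…` carries the node's UID from the Pod's ownerReferences. This naming scheme is inferred from the output in this issue, not taken from the sbomscanner code, which may build the name differently. A minimal sketch of the inferred scheme:

```go
package main

import (
	"fmt"
	"strings"
)

// reportName builds a workloadScanReport name from an owner kind and UID.
// The "<lowercased kind>-<uid>" scheme is inferred from the observed
// "node-4d004fc8-…" report name; the real helper may differ.
func reportName(kind, uid string) string {
	return fmt.Sprintf("%s-%s", strings.ToLower(kind), uid)
}

func main() {
	// The buggy case: the Node owner's UID leaks into the report name.
	fmt.Println(reportName("Node", "4d004fc8-3d8d-4680-92b3-930b773d51a9"))
	// The expected case: the Pod itself as owner ("hypothetical-pod-uid"
	// is a placeholder, not a UID from this issue).
	fmt.Println(reportName("Pod", "hypothetical-pod-uid"))
}
```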

Currently, the code is:

    if ownerReference == nil {
        return &metav1.OwnerReference{
            APIVersion: "v1",
            Kind:       "Pod",
            Name:       pod.Name,
            UID:        pod.UID,
        }, nil
    }

Maybe it should be modified to:

if ownerReference == nil || ownerReference.Kind == "Node" {
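In context, the proposed change makes the existing Pod fallback also fire for mirror pods, whose controller owner is the Node object. A self-contained sketch of the idea (the function name `resolveOwner` and the simplified stand-in types are assumptions for illustration; only the body of the nil check comes from the actual code, which uses `metav1.OwnerReference` and `corev1.Pod` from k8s.io/apimachinery and k8s.io/api):

```go
package main

import "fmt"

// Minimal stand-ins for the Kubernetes types used by the real code,
// so the sketch is self-contained.
type OwnerReference struct {
	APIVersion string
	Kind       string
	Name       string
	UID        string
}

type Pod struct {
	Name string
	UID  string
}

// resolveOwner mirrors the proposed fix: fall back to the Pod itself both
// when there is no controller owner and when the controller owner is the
// Node object (i.e. a mirror pod), so the report is attached to and named
// after the Pod rather than the Node.
func resolveOwner(pod *Pod, owner *OwnerReference) *OwnerReference {
	if owner == nil || owner.Kind == "Node" {
		return &OwnerReference{
			APIVersion: "v1",
			Kind:       "Pod",
			Name:       pod.Name,
			UID:        pod.UID,
		}
	}
	return owner
}

func main() {
	mirror := &Pod{Name: "etcd-kw-sbom-ui-rke2-213-0316", UID: "some-pod-uid"}
	nodeOwner := &OwnerReference{APIVersion: "v1", Kind: "Node", Name: "kw-sbom-ui-rke2-213-0316"}
	// With the fix, a mirror pod's Node owner is replaced by the Pod itself:
	fmt.Println(resolveOwner(mirror, nodeOwner).Kind) // prints "Pod"
}
```

Pods with a real controller (e.g. a ReplicaSet) would pass through unchanged, so only mirror pods and ownerless pods hit the fallback.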

Steps To Reproduce

Version: Backend v0.10.1 + UI ext v0.6.0 + Rancher Manager v2.13

Steps:

  • Enable workload scan for all ns
  • Wait for all workload scan to finish
  • Find some scanned images that come from mirror pods (whose ownerReferences.kind = Node)
  • First, take this scanned image "index.docker.io/rancher/hardened-etcd:v3.6.7-k3s1-build20260227" as an example
  • Find out its pod (whose ownerReferences.kind = Node)
root@kw-sbom-ui-rke2-213-0316:~# k get pod etcd-kw-sbom-ui-rke2-213-0316 -n kube-system -o yaml | grep -i image:
    image: index.docker.io/rancher/hardened-etcd:v3.6.7-k3s1-build20260227

root@kw-sbom-ui-rke2-213-0316:~# k get pod etcd-kw-sbom-ui-rke2-213-0316 -n kube-system -o yaml | grep -i ownerR -C30
apiVersion: v1
kind: Pod
metadata:
  annotations:
    etcd.k3s.io/initial: '{"initial-advertise-peer-urls":"https://10.115.50.54:2380/","initial-cluster":"kw-sbom-ui-rke2-213-0316-10f7457f=https://10.115.50.54:2380/","initial-cluster-state":"new"}'
    kubernetes.io/config.hash: cf43983129b6fa6af4006ed386d1896a
    kubernetes.io/config.mirror: cf43983129b6fa6af4006ed386d1896a
    kubernetes.io/config.seen: "2026-03-16T07:43:35.044086732Z"
    kubernetes.io/config.source: file
  creationTimestamp: "2026-03-16T07:44:16Z"
  generation: 1
  labels:
    component: etcd
    tier: control-plane
  name: etcd-kw-sbom-ui-rke2-213-0316
  namespace: kube-system
  ownerReferences:
  - apiVersion: v1
    controller: true
    kind: Node
    name: kw-sbom-ui-rke2-213-0316
    uid: 4d004fc8-3d8d-4680-92b3-930b773d51a9
  resourceVersion: "355"
  • Then find its Image resource and see that its status is {} -> Bug!
root@kw-sbom-ui-rke2-213-0316:~# k get image 209d85fa999eef292686dd285f6fcccfc80b01bb137414757a2f0c5ee222beaa -n cattle-sbomscanner-system -o yaml | tail -30
  digest: sha256:8762df533dcd71308773cb9ccf966afe27c3a8b6a16c20adc9889c4191fb615a
  indexDigest: sha256:b4c30144bc1b97bc8e7c110de8c9569c608d7ad5306b36a6bb8e89675120437c
  platform: linux/amd64
  registry: workloadscan-5e9dacd3577946a29e209210ba0c1bd2c9e299e33083865577af5ad129e4e20e
  registryURI: index.docker.io
  repository: rancher/hardened-etcd
  tag: v3.6.7-k3s1-build20260227
kind: Image
layers:
- command: Q09QWSAvdXNyL2xvY2FsL2Jpbi8gL3Vzci9sb2NhbC9iaW4vICMgYnVpbGRraXQ=
  diffID: sha256:99312259ab98b42d69fc7ec41f9a40c5802123ee208fd55726fdad4d5daf5919
  digest: sha256:d8e4e102132a4a1b272a91832aa9d354092e1640339a59089024461392d521c0
metadata:
  creationTimestamp: "2026-03-16T09:50:29Z"
  labels:
    app.kubernetes.io/managed-by: sbomscanner
    app.kubernetes.io/part-of: sbomscanner
    sbomscanner.kubewarden.io/workloadscan: "true"
  name: 209d85fa999eef292686dd285f6fcccfc80b01bb137414757a2f0c5ee222beaa
  namespace: cattle-sbomscanner-system
  ownerReferences:
  - apiVersion: sbomscanner.kubewarden.io/v1alpha1
    blockOwnerDeletion: true
    controller: true
    kind: Registry
    name: workloadscan-5e9dacd3577946a29e209210ba0c1bd2c9e299e33083865577af5ad129e4e20e
    uid: cf611b5f-8b1b-4709-8552-13c309572e0d
  resourceVersion: "297"
  uid: 5a202021-7280-4c7b-99ab-985926d735ed
status: {}
  • And find that it has no workloadScanReport
  • Next, take another scanned image "index.docker.io/rancher/hardened-kubernetes:v1.34.5-rke2r1-build20260227" as a second example
  • Find out its pod (whose ownerReferences.kind = Node)
root@kw-sbom-ui-rke2-213-0316:~# kubectl get pods -n kube-system -l component=kube-apiserver -o yaml | grep "image:"
      image: index.docker.io/rancher/hardened-kubernetes:v1.34.5-rke2r1-build20260227

root@kw-sbom-ui-rke2-213-0316:~# kubectl get pods -n kube-system -l component=kube-apiserver -o yaml | grep -i ownerR -C30
apiVersion: v1
items:
- apiVersion: v1
  kind: Pod
  metadata:
    annotations:
      kubernetes.io/config.hash: 2520d194469e5522b64c2ca3d9fa1e38
      kubernetes.io/config.mirror: 2520d194469e5522b64c2ca3d9fa1e38
      kubernetes.io/config.seen: "2026-03-16T07:43:53.350157902Z"
      kubernetes.io/config.source: file
    creationTimestamp: "2026-03-16T07:44:19Z"
    generation: 1
    labels:
      component: kube-apiserver
      tier: control-plane
    name: kube-apiserver-kw-sbom-ui-rke2-213-0316
    namespace: kube-system
    ownerReferences:
    - apiVersion: v1
      controller: true
      kind: Node
  • Then find its Image resource and see that its status is as below (the report name shows "node-xxx", which is wrong) -> Bug!:
status:
  workloadScanReports:
  - name: node-4d004fc8-3d8d-4680-92b3-930b773d51a9
    namespace: kube-system
  • And see that its workloadScanReport is named as below -> Bug! (it should be "pod-xxxx"):
node-4d004fc8-3d8d-4680-92b3-930b773d51a9
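Besides checking ownerReferences.kind == "Node", mirror pods can also be recognized by the kubernetes.io/config.mirror annotation visible in the pod YAML above, which the kubelet sets on mirror pods of static pods. A small sketch of that alternative (or complementary) check:

```go
package main

import "fmt"

// mirrorAnnotation is the annotation the kubelet puts on mirror pods,
// visible in the etcd and kube-apiserver pod YAML in this issue.
const mirrorAnnotation = "kubernetes.io/config.mirror"

// isMirrorPod reports whether a pod's annotations mark it as a mirror pod.
// An alternative (or complement) to checking ownerReferences.kind == "Node".
func isMirrorPod(annotations map[string]string) bool {
	_, ok := annotations[mirrorAnnotation]
	return ok
}

func main() {
	etcd := map[string]string{
		"kubernetes.io/config.mirror": "cf43983129b6fa6af4006ed386d1896a",
		"kubernetes.io/config.source": "file",
	}
	fmt.Println(isMirrorPod(etcd))                // prints "true"
	fmt.Println(isMirrorPod(map[string]string{})) // prints "false"
}
```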

Environment

- OS:
- Architecture:

Anything else?

No response
