Is there an existing issue for this?
Troubleshooting logs
Current Behavior
None of the sbomscanner pods reaches the Running state in the virtual cluster.
Expected Behavior
All sbomscanner pods should be Running.
Steps To Reproduce
- Create a fresh VM (e.g. 8 CPU cores, 16 GB memory, 80 GB disk) to act as the host
- Install RKE2 on this host (a minimal install sketch follows)
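For reference, the RKE2 server install amounts to roughly this (a sketch following the RKE2 quickstart at https://docs.rke2.io/install/quickstart; adjust for your distro):
# Install and start the RKE2 server
curl -sfL https://get.rke2.io | sh -
systemctl enable --now rke2-server.service
# Point kubectl at the new cluster
export KUBECONFIG=/etc/rancher/rke2/rke2.yaml
export PATH=$PATH:/var/lib/rancher/rke2/bin
kubectl get nodes   # the host node should become Ready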
- Set up a default StorageClass on this host:
kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/deploy/local-path-storage.yaml
kubectl annotate sc local-path storageclass.kubernetes.io/is-default-class="true" --overwrite
kubectl get sc # should show local-path as the default
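If in doubt, the default-class annotation can also be checked directly (same local-path class as above):
kubectl get sc local-path -o jsonpath='{.metadata.annotations.storageclass\.kubernetes\.io/is-default-class}'   # should print: true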
- Install the k3k controller by following the k3k docs
- Install k3kcli by following the docs (a sketch of both installs follows)
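For completeness, the k3k install I used looks roughly like this (a sketch assuming the Helm chart from the k3k README; check the docs for current values):
helm repo add k3k https://rancher.github.io/k3k
helm repo update
helm install k3k k3k/k3k --namespace k3k-system --create-namespace
# k3kcli: download the binary for your platform from the releases page
# https://github.com/rancher/k3k/releases and put it on your PATH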
- Create a virtual cluster with k3kcli:
root@kw-sbom-backend-rke2-0420-k3k:~# k3kcli cluster create mycluster
INFO[0000] Creating cluster 'mycluster' in namespace 'k3k-mycluster'
INFO[0000] Cluster 'mycluster' already exists
INFO[0001] Cluster details:
  Mode: shared
  Servers: 1
  Version: v1.34.6 (Host: v1.34.6)
  Persistence:
    Type: dynamic
    Size: 2G
INFO[0001] Waiting for cluster to be available..
INFO[0036] Extracting Kubeconfig for 'mycluster' cluster
INFO[0036] certificate CN=system:admin,O=system:masters signed by CN=k3s-client-ca@1776659560: notBefore=2026-04-20 03:32:40 +0000 UTC notAfter=2027-04-20 04:33:52 +0000 UTC
INFO[0036] You can start using the cluster with:
export KUBECONFIG=/root/k3k-mycluster-mycluster-kubeconfig.yaml
kubectl cluster-info
- Now switch to the virtual cluster's kubeconfig and set up the virtual cluster:
export KUBECONFIG=/root/k3k-mycluster-mycluster-kubeconfig.yaml
- Set up a default StorageClass in the virtual cluster:
kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/deploy/local-path-storage.yaml
kubectl annotate sc local-path storageclass.kubernetes.io/is-default-class="true" --overwrite
kubectl get sc # should show local-path as the default
- Follow the quickstart to install cert-manager and CNPG (CloudNativePG) in this virtual cluster and make sure those pods are Running (a sketch of both installs follows)
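The quickstart steps I followed amount to roughly this (a sketch; chart versions omitted, and on older cert-manager charts the CRD option is installCRDs rather than crds.enabled):
helm repo add jetstack https://charts.jetstack.io
helm repo add cnpg https://cloudnative-pg.github.io/charts
helm repo update
helm install cert-manager jetstack/cert-manager \
  --namespace cert-manager --create-namespace \
  --set crds.enabled=true --wait
helm install cnpg cnpg/cloudnative-pg \
  --namespace cnpg-system --create-namespace --wait
kubectl get pods -n cert-manager
kubectl get pods -n cnpg-system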
- Install sbomscanner in this virtual cluster with the commands below:
helm repo add kubewarden https://charts.kubewarden.io/
helm repo update
helm install sbomscanner kubewarden/sbomscanner \
--set=controller.logLevel=debug \
--set=storage.logLevel=debug \
--set=worker.logLevel=debug \
--namespace sbomscanner \
--create-namespace \
--debug \
--wait
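If --wait times out here, the release state and rollout can still be inspected:
helm status sbomscanner -n sbomscanner
kubectl get pods -n sbomscanner -w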
- Observe that none of the sbomscanner pods reaches Running:
root@kw-sbom-backend-rke2-0420-k3k:~# k get pod -n sbomscanner
NAME                                      READY   STATUS              RESTARTS       AGE
sbomscanner-cnpg-cluster-1-initdb-48qr9   0/1     Init:0/1            0              7m42s
sbomscanner-controller-66f85dcc69-dxrnb   0/1     Init:0/1            2 (99s ago)    7m43s
sbomscanner-controller-66f85dcc69-pl222   0/1     Init:0/1            2 (99s ago)    7m43s
sbomscanner-controller-66f85dcc69-tm5v2   0/1     Init:0/1            2 (98s ago)    7m43s
sbomscanner-nats-0                        0/2     ContainerCreating   0              7m43s
sbomscanner-nats-1                        0/2     ContainerCreating   0              7m43s
sbomscanner-nats-2                        0/2     Pending             0              7m43s
sbomscanner-storage-86cbc74dcd-frrzv      0/1     Init:0/1            2 (100s ago)   7m43s
sbomscanner-storage-86cbc74dcd-pcxbw      0/1     Init:0/1            2 (100s ago)   7m43s
sbomscanner-storage-86cbc74dcd-rqg8n      0/1     Init:0/1            2 (100s ago)   7m43s
sbomscanner-worker-65b674fc86-k6fdk       0/1     Init:0/1            2 (98s ago)    7m43s
sbomscanner-worker-65b674fc86-m4gx6       0/1     Init:0/1            2 (98s ago)    7m43s
sbomscanner-worker-65b674fc86-sxm7z       0/1     Init:0/1            2 (98s ago)    7m43s
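To narrow down why the pods are stuck, standard diagnostics like these should help (pod name taken from the output above):
# Events usually explain Init/ContainerCreating/Pending states
kubectl get events -n sbomscanner --sort-by=.lastTimestamp
# Unbound PVCs are a common cause of Pending/ContainerCreating in a virtual cluster
kubectl get pvc -n sbomscanner
# Inspect one stuck pod, including its init container
kubectl describe pod -n sbomscanner sbomscanner-controller-66f85dcc69-dxrnb
kubectl logs -n sbomscanner sbomscanner-controller-66f85dcc69-dxrnb --all-containers --prefix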
The collected logs are attached:
sbomscanner-debug-20260423_031019.tar.gz
Environment
Anything else?
I think my steps are correct. Could you help check? Thanks.