Status: Open
Labels: bug (Something isn't working)
Description
Chart Name: CI
Operating System: Talos-OS 1.9
Deployment Method: Helm
Chart Version: N/A
Kubernetes Events
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 3s default-scheduler Successfully assigned multus-cni-j5ms4frd8z/multus-cni-j5ms4frd8z-whs9k to k3d-k3s-default-server-0
Normal Pulling 3s kubelet Pulling image "ghcr.io/k8snetworkplumbingwg/multus-cni:v4.2.3-thick"
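The events never progress past Pulling: no Pulled, Created, or Started event follows, and the pod later reports its containers still waiting in PodInitializing (see the chart logs below). A quick way to watch for this live (a sketch; the namespace and pod name come from this particular CI run and change every run):

```sh
# Stream events for the stuck pod; with this bug, nothing after "Pulling"
# (no Pulled/Created/Started) ever appears for the init container.
kubectl -n multus-cni-j5ms4frd8z get events --watch \
  --field-selector involvedObject.name=multus-cni-j5ms4frd8z-whs9k
```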
Chart Logs
Installing charts...
Version increment checking disabled.
------------------------------------------------------------------------------------------------------------------------
Charts to be processed:
------------------------------------------------------------------------------------------------------------------------
multus-cni => (version: "0.1.0", path: "charts/incubator/multus-cni")
------------------------------------------------------------------------------------------------------------------------
"jetstack" already exists with the same configuration, skipping
"grafana" already exists with the same configuration, skipping
"cnpg" already exists with the same configuration, skipping
"metallb" already exists with the same configuration, skipping
"openebs" already exists with the same configuration, skipping
"csi-driver-smb" already exists with the same configuration, skipping
"csi-driver-nfs" already exists with the same configuration, skipping
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "csi-driver-nfs" chart repository
...Successfully got an update from the "csi-driver-smb" chart repository
...Successfully got an update from the "metallb" chart repository
...Successfully got an update from the "openebs" chart repository
...Successfully got an update from the "cnpg" chart repository
...Successfully got an update from the "jetstack" chart repository
...Successfully got an update from the "grafana" chart repository
Update Complete. ⎈Happy Helming!⎈
Saving 1 charts
Downloading common from repo oci://oci.trueforge.org/truecharts
Deleting outdated charts
Pulled: oci.trueforge.org/truecharts/common:28.26.3
Digest: sha256:fdede3408709e942741cd80b4aa94493ba598d2529a2ee4c64a76244fe955828
Installing chart "multus-cni => (version: \"0.1.0\", path: \"charts/incubator/multus-cni\")"...
Creating namespace "multus-cni-j5ms4frd8z"...
namespace/multus-cni-j5ms4frd8z created
NAME: multus-cni-j5ms4frd8z
LAST DEPLOYED: Sun Jan 4 00:10:06 2026
NAMESPACE: multus-cni-j5ms4frd8z
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
# Thank you for installing multus-cni by TrueCharts.
## Connecting externally
You can use this Chart by opening the following links in your browser:
- http://127.0.0.1:8080
## Dependencies for multus-cni
- Chart: oci://oci.trueforge.org/truecharts/common
Version: 28.26.3
## Connecting Internally
You can reach this chart inside your cluster, using the following service URLS:
- multus-cni-j5ms4frd8z.multus-cni-j5ms4frd8z.svc.cluster.local:%!s(float64=8080)
## Sources for multus-cni
- https://github.com/k8snetworkplumbingwg/multus-cni
- https://github.com/trueforge-org/truecharts/tree/master/charts/incubator/multus-cni
See more for **multus-cni** at (https://truecharts.org/charts/incubator/multus-cni)
## Documentation
Please check out the TrueCharts documentation on:
https://truecharts.org
OpenSource can only exist with your help, please consider supporting TrueCharts:
https://trueforge.org/sponsor
========================================================================================================================
........................................................................................................................
==> Events of namespace multus-cni-j5ms4frd8z
........................................................................................................................
LAST SEEN TYPE REASON OBJECT SUBOBJECT SOURCE MESSAGE FIRST SEEN COUNT NAME
2s Normal Scheduled pod/multus-cni-j5ms4frd8z-whs9k default-scheduler, default-scheduler-k3d-k3s-default-server-0 Successfully assigned multus-cni-j5ms4frd8z/multus-cni-j5ms4frd8z-whs9k to k3d-k3s-default-server-0 2s 1 multus-cni-j5ms4frd8z-whs9k.18875e9cf767179c
2s Normal Pulling pod/multus-cni-j5ms4frd8z-whs9k spec.initContainers{multus-cni-j5ms4frd8z-init-multus-plugin-installer} kubelet, k3d-k3s-default-server-0 Pulling image "ghcr.io/k8snetworkplumbingwg/multus-cni:v4.2.3-thick" 2s 1 multus-cni-j5ms4frd8z-whs9k.18875e9d0d0901a0
2s Normal SuccessfulCreate daemonset/multus-cni-j5ms4frd8z daemonset-controller Created pod: multus-cni-j5ms4frd8z-whs9k 2s 1 multus-cni-j5ms4frd8z.18875e9cf6a76451
........................................................................................................................
<== Events of namespace multus-cni-j5ms4frd8z
........................................................................................................................
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
==> Description of pod multus-cni-j5ms4frd8z-whs9k
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Name: multus-cni-j5ms4frd8z-whs9k
Namespace: multus-cni-j5ms4frd8z
Priority: 2000001000
Priority Class Name: system-node-critical
Service Account: multus-cni-j5ms4frd8z
Node: k3d-k3s-default-server-0/172.18.0.2
Start Time: Sun, 04 Jan 2026 00:10:07 +0000
Labels: app=multus-cni-0.1.0
app.kubernetes.io/instance=multus-cni-j5ms4frd8z
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=multus-cni
app.kubernetes.io/version=4.2.3
controller-revision-hash=859f5cc997
helm-revision=1
helm.sh/chart=multus-cni-0.1.0
pod-template-generation=1
pod.lifecycle=permanent
pod.name=main
release=multus-cni-j5ms4frd8z
Annotations: checksum/cnpg: fc940fff4269c53072f7039f0811133e9b83400670af014010bf50d3f13740f4
checksum/configmaps: d47e9278946456a0b291dae71943594c4361e588d64de54f128178cce0db8394
checksum/mariadb: 09c85576cb45b1eecd1467732b11ea8fa3363b0105c465f02a6ad64991521d52
checksum/mongodb: 09c85576cb45b1eecd1467732b11ea8fa3363b0105c465f02a6ad64991521d52
checksum/persistence: dd25020191459c1f7261b6e1f10e36b932665db363944456a647776bb55a8a3d
checksum/redis: 013343a028cbb3f7e08f4ba7522702dd98e52632c688641074b0b1db3df29894
checksum/secrets: 44136fa355b3678a1146ad16f7e8649e94fb4fc21fe77e8310c060f61caaff8a
checksum/services: 1184b68fa51396c57f43de4dc2caf86dc0e32968ec34ffce589240ebc3fc9bcf
checksum/solr: 29c14feeaddbf7762052db593898d274941f539cee681ddc613957587686f347
Status: Pending
IP: 172.18.0.2
IPs:
IP: 172.18.0.2
Controlled By: DaemonSet/multus-cni-j5ms4frd8z
Init Containers:
multus-cni-j5ms4frd8z-init-multus-plugin-installer:
Container ID:
Image: ghcr.io/k8snetworkplumbingwg/multus-cni:v4.2.3-thick
Image ID:
Port: <none>
Host Port: <none>
SeccompProfile: RuntimeDefault
Command:
/usr/src/multus-cni/bin/install_multus
Args:
-d
/opt/cni/bin
-t
thick
State: Waiting
Reason: PodInitializing
Ready: False
Restart Count: 0
Limits:
cpu: 1500m
memory: 2400Mi
Requests:
cpu: 10m
memory: 15Mi
Environment:
TZ: UTC
UMASK: 0022
UMASK_SET: 0022
NVIDIA_VISIBLE_DEVICES: void
PUID: 568
USER_ID: 568
UID: 568
PGID: 568
GROUP_ID: 568
GID: 568
S6_READ_ONLY_ROOT: 1
Mounts:
/dev/shm from devshm (rw)
/opt/cni/bin from cnibin (rw)
/shared from shared (rw)
/tmp from tmp (rw)
/var/logs from varlogs (rw)
/var/run from varrun (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hxkq4 (ro)
Containers:
multus-cni-j5ms4frd8z:
Container ID:
Image: ghcr.io/k8snetworkplumbingwg/multus-cni:v4.2.3-thick
Image ID:
Port: 8080/TCP (main)
Host Port: 8080/TCP (main)
SeccompProfile: RuntimeDefault
Args:
--config
/host/etc/cni/net.d/multus.d/daemon-config.json
State: Waiting
Reason: PodInitializing
Ready: False
Restart Count: 0
Limits:
cpu: 200m
memory: 500Mi
Requests:
cpu: 100m
memory: 50Mi
Liveness: exec [sh -c cat "/host/etc/cni/net.d"/00-multus.conf*] delay=12s timeout=5s period=15s #success=1 #failure=5
Readiness: exec [sh -c cat "/host/etc/cni/net.d"/00-multus.conf*] delay=10s timeout=5s period=12s #success=2 #failure=4
Startup: exec [sh -c cat "/host/etc/cni/net.d"/00-multus.conf*] delay=10s timeout=3s period=5s #success=1 #failure=60
Environment:
TZ: UTC
UMASK: 0022
UMASK_SET: 0022
NVIDIA_VISIBLE_DEVICES: void
PUID: 568
USER_ID: 568
UID: 568
PGID: 568
GROUP_ID: 568
GID: 568
S6_READ_ONLY_ROOT: 1
MULTUS_NODE_NAME: (v1:spec.nodeName)
Mounts:
/dev/shm from devshm (rw)
/host/etc/cni/net.d from cniconf (rw)
/host/etc/cni/net.d/multus.d/daemon-config.json from daemonconfig (ro,path="daemon-config.json")
/host/run from hostrun (rw)
/hostroot from hostroot (rw)
/opt/cni/bin from cnibin (rw)
/run/k8s.cni.cncf.io from hostrunk8scnicncfio (rw)
/shared from shared (rw)
/tmp from tmp (rw)
/var/lib/cni/multus from cnimultusdata (rw)
/var/lib/kubelet from hostvarlibkubelet (rw)
/var/logs from varlogs (rw)
/var/run from varrun (rw)
/var/run/netns from hostrunnetns (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hxkq4 (ro)
Conditions:
Type Status
PodReadyToStartContainers False
Initialized False
Ready False
ContainersReady False
PodScheduled True
Volumes:
daemonconfig:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: multus-cni-j5ms4frd8z-config
Optional: false
devshm:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium: Memory
SizeLimit: 2400Mi
shared:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
tmp:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium: Memory
SizeLimit: 2400Mi
varlogs:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium: Memory
SizeLimit: 2400Mi
varrun:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium: Memory
SizeLimit: 2400Mi
cnibin:
Type: HostPath (bare host directory volume)
Path: /opt/cni/bin
HostPathType: DirectoryOrCreate
cniconf:
Type: HostPath (bare host directory volume)
Path: /etc/cni/net.d
HostPathType: DirectoryOrCreate
cnimultusdata:
Type: HostPath (bare host directory volume)
Path: /var/lib/cni/multus
HostPathType: DirectoryOrCreate
cnimultusdatacleanup:
Type: HostPath (bare host directory volume)
Path: /var/lib/cni
HostPathType: DirectoryOrCreate
hostroot:
Type: HostPath (bare host directory volume)
Path: /
HostPathType: Directory
hostrun:
Type: HostPath (bare host directory volume)
Path: /run
HostPathType: Directory
hostrunk8scnicncfio:
Type: HostPath (bare host directory volume)
Path: /run/k8s.cni.cncf.io
HostPathType: DirectoryOrCreate
hostrunnetns:
Type: HostPath (bare host directory volume)
Path: /run/netns
HostPathType: DirectoryOrCreate
hostvarlibkubelet:
Type: HostPath (bare host directory volume)
Path: /var/lib/kubelet
HostPathType: Directory
kube-api-access-hxkq4:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
Optional: false
DownwardAPI: true
QoS Class: Burstable
Node-Selectors: kubernetes.io/arch=amd64
Tolerations: op=Exists
node.kubernetes.io/disk-pressure:NoSchedule op=Exists
node.kubernetes.io/memory-pressure:NoSchedule op=Exists
node.kubernetes.io/network-unavailable:NoSchedule op=Exists
node.kubernetes.io/not-ready:NoExecute op=Exists
node.kubernetes.io/pid-pressure:NoSchedule op=Exists
node.kubernetes.io/unreachable:NoExecute op=Exists
node.kubernetes.io/unschedulable:NoSchedule op=Exists
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
<== Description of pod multus-cni-j5ms4frd8z-whs9k
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
------------------------------------------------------------------------------------------------------------------------
==> Logs of init container multus-cni-j5ms4frd8z-whs9k
------------------------------------------------------------------------------------------------------------------------
Error from server (BadRequest): container "multus-cni-j5ms4frd8z-init-multus-plugin-installer" in pod "multus-cni-j5ms4frd8z-whs9k" is waiting to start: PodInitializing
Error printing details: failed waiting for process: exit status 1
------------------------------------------------------------------------------------------------------------------------
==> Logs of container multus-cni-j5ms4frd8z-whs9k
------------------------------------------------------------------------------------------------------------------------
Error from server (BadRequest): container "multus-cni-j5ms4frd8z" in pod "multus-cni-j5ms4frd8z-whs9k" is waiting to start: PodInitializing
Error printing details: failed waiting for process: exit status 1
========================================================================================================================
Deleting release "multus-cni-j5ms4frd8z"...
release "multus-cni-j5ms4frd8z" uninstalled
Deleting namespace "multus-cni-j5ms4frd8z"...
namespace "multus-cni-j5ms4frd8z" deleted
Namespace "multus-cni-j5ms4frd8z" terminated.
------------------------------------------------------------------------------------------------------------------------
✔︎ multus-cni => (version: "0.1.0", path: "charts/incubator/multus-cni")
------------------------------------------------------------------------------------------------------------------------
All charts installed successfully
Chart Configuration
DaemonSet workload with additional pod options like so:

```yaml
podOptions:
  hostNetwork: true
  hostPID: true
  automountServiceAccountToken: true
  priorityClassName: system-node-critical
  tolerations:
    - operator: Exists
```

Describe the bug
Found as part of #43365
CI run showing the issue: https://github.com/trueforge-org/truecharts/actions/runs/20684829523/job/59383843736
Summary:
The chart deploys and CI appears to pass (all green), but the logs show that while the pod is scheduled successfully, its containers never start: the init container remains stuck in PodInitializing. This only happens when the workload is a DaemonSet; once it is changed to a StatefulSet, the containers are scheduled and started as expected.
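For reference, a minimal manual spot-check (a sketch; the namespace and release name below come from the CI run above and are regenerated every run):

```sh
NS=multus-cni-j5ms4frd8z   # CI-generated namespace; differs per run

# The pod is Pending and scheduled, but never becomes Ready.
kubectl -n "$NS" get pods -o wide

# Both the init container and the main container sit in Waiting/PodInitializing.
kubectl -n "$NS" describe pod -l app.kubernetes.io/name=multus-cni

# Fails with "is waiting to start: PodInitializing" while the bug is present.
kubectl -n "$NS" logs daemonset/multus-cni-j5ms4frd8z --all-containers
```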
To Reproduce
See "Describe the bug" above.
Expected Behavior
DaemonSets are deployed as normal, and their workloads start.
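Until the root cause is fixed, a readiness gate along these lines would also make CI fail loudly instead of passing green (a hedged sketch, not an existing step in the chart CI):

```sh
# Block until every DaemonSet pod is ready; times out with a non-zero exit
# code if, as in this bug, the containers never start.
kubectl -n multus-cni-j5ms4frd8z rollout status \
  daemonset/multus-cni-j5ms4frd8z --timeout=120s
```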
Screenshots: No response
Additional Context: No response
I've read and agree with the following
- I've checked all open and closed issues and my issue is not there.
- I've prefixed my issue title with [Chart-Name]