Issue found on release-1.33 branch with version v1.33.2+rke2r1
Issue found on release-1.32 branch with version v1.32.6+rke2r1
Issue found on release-1.31 branch with version v1.31.10+rke2r1
Issue found on release-1.30 branch with version v1.30.14+rke2r1
Environment Details
Infrastructure
- Cloud
- Hosted
Node(s) CPU architecture, OS, and Version:
$ cat /etc/os-release | grep PRETTY
PRETTY_NAME="Red Hat Enterprise Linux 10.0 (Coughlan)"
$ uname -m
x86_64
Cluster Configuration:
1 server / 1 agent
Config.yaml:
token: xxxx
write-kubeconfig-mode: "0644"
node-external-ip: 1.1.1.1
selinux: true
debug: true
CNI values tried:
cni: multus,calico
cni: cilium
and the default CNI (canal)
Was SELinux enabled: true
$ sestatus
SELinux status: enabled
SELinuxfs mount: /sys/fs/selinux
SELinux root directory: /etc/selinux
Loaded policy name: targeted
Current mode: enforcing
Mode from config file: enforcing
Policy MLS status: enabled
Policy deny_unknown status: allowed
Memory protection checking: actual (secure)
Max kernel policy version: 33
$ rpm -qa | grep selinux
libselinux-3.8-1.el10.x86_64
libselinux-utils-3.8-1.el10.x86_64
python3-libselinux-3.8-1.el10.x86_64
selinux-policy-40.13.26-1.el10.noarch
selinux-policy-targeted-40.13.26-1.el10.noarch
rpm-plugin-selinux-4.19.1.1-12.el10.x86_64
container-selinux-2.235.0-2.el10_0.noarch
rke2-selinux-0.20-1.el8.noarch
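One extra check that may help here (a suggested diagnostic, not part of the original report): the package list alone does not prove the SELinux policy modules are active, so it is worth confirming they actually loaded:
$ sudo semodule -l | grep -E 'rke2|container'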
Steps to reproduce:
- Copy config.yaml
$ sudo mkdir -p /etc/rancher/rke2 && sudo cp config.yaml /etc/rancher/rke2
- Install RKE2
$ curl -sfL https://get.rke2.io | sudo INSTALL_RKE2_VERSION='v1.33.2+rke2r1' INSTALL_RKE2_TYPE='server' INSTALL_RKE2_METHOD=tar sh -
- Start the RKE2 service
$ sudo systemctl enable --now rke2-server
or
$ sudo systemctl enable --now rke2-agent
- Verify cluster status:
$ kubectl get nodes -o wide
$ kubectl get pods -A
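If the service does not come up cleanly, the systemd journal is the usual first stop (standard systemd usage, not part of the original steps):
$ sudo journalctl -u rke2-server --no-pager | tail -n 50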
Reproducing Results/Observations:
- rke2 version used for replication:
$ rke2 -v
rke2 version v1.33.2+rke2r1 (62de1b13c6405a51bb4804a1dae6c2a5a17cc090)
go version go1.24.4 X:boringcrypto
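Note: kno, kpo, kspd, and kspl in the output below are shell aliases on the test hosts. They are not defined in the report, but from their output they correspond to roughly the following:
alias kno='kubectl get nodes -o wide'
alias kpo='kubectl get pods -A'
alias kspd='kubectl describe pod -n kube-system'
alias kspl='kubectl logs -n kube-system'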
multus,calico results:
$ kno
NAME STATUS ROLES AGE VERSION
ip-172-31-21-200.us-east-2.compute.internal NotReady control-plane,etcd,master 50m v1.33.2+rke2r1
ip-172-31-25-66.us-east-2.compute.internal NotReady <none> 48m v1.33.2+rke2r1
[ec2-user@ip-172-31-21-200 ~]$ kpo
NAMESPACE NAME READY STATUS RESTARTS AGE
auto-clusterip test-clusterip-75c677b668-4sr78 0/1 Pending 0 37m
auto-clusterip test-clusterip-75c677b668-w2txz 0/1 Pending 0 37m
auto-dns dnsutils 0/1 Pending 0 37m
auto-ingress test-ingress-gppgq 0/1 Pending 0 37m
auto-ingress test-ingress-hrjbh 0/1 Pending 0 37m
auto-nodeport test-nodeport-694f69f944-g4ghj 0/1 Pending 0 37m
auto-nodeport test-nodeport-694f69f944-p2j82 0/1 Pending 0 37m
clusterip clusterip-pod-demo 0/1 Pending 0 37m
clusterip clusterip-pod-demo-2 0/1 Pending 0 37m
clusterip clusterip-pod-demo-3 0/1 Pending 0 37m
kube-system cloud-controller-manager-ip-172-31-21-200.us-east-2.compute.internal 1/1 Running 0 50m
kube-system etcd-ip-172-31-21-200.us-east-2.compute.internal 1/1 Running 0 50m
kube-system helm-install-rke2-calico-7tmqt 0/1 Completed 1 50m
kube-system helm-install-rke2-calico-crd-s8jjg 0/1 Completed 0 50m
kube-system helm-install-rke2-coredns-69n4g 0/1 Completed 0 50m
kube-system helm-install-rke2-ingress-nginx-n7fld 0/1 Pending 0 50m
kube-system helm-install-rke2-metrics-server-9l9dr 0/1 Pending 0 50m
kube-system helm-install-rke2-multus-nv6qz 0/1 Completed 0 50m
kube-system helm-install-rke2-runtimeclasses-5cdfz 0/1 Pending 0 50m
kube-system helm-install-rke2-snapshot-controller-crd-jqmhg 0/1 Pending 0 50m
kube-system helm-install-rke2-snapshot-controller-jtkzx 0/1 Pending 0 50m
kube-system kube-apiserver-ip-172-31-21-200.us-east-2.compute.internal 1/1 Running 0 50m
kube-system kube-controller-manager-ip-172-31-21-200.us-east-2.compute.internal 1/1 Running 0 50m
kube-system kube-proxy-ip-172-31-21-200.us-east-2.compute.internal 0/1 Running 14 (7s ago) 50m
kube-system kube-proxy-ip-172-31-25-66.us-east-2.compute.internal 0/1 CrashLoopBackOff 12 (5m9s ago) 48m
kube-system kube-scheduler-ip-172-31-21-200.us-east-2.compute.internal 1/1 Running 0 50m
kube-system rke2-coredns-rke2-coredns-65dc69968-dldgs 0/1 Pending 0 50m
kube-system rke2-coredns-rke2-coredns-autoscaler-68d5f76f7-bfdhc 0/1 Pending 0 50m
kube-system rke2-multus-clzgl 0/1 CrashLoopBackOff 14 (59s ago) 48m
kube-system rke2-multus-snl7l 0/1 CrashLoopBackOff 14 (3m32s ago) 50m
more-clusterip test-clusterip-75c677b668-c4rff 0/1 Pending 0 33m
more-clusterip test-clusterip-75c677b668-f56w6 0/1 Pending 0 33m
more-dns dnsutils 0/1 Pending 0 33m
more-ingress test-ingress-mgj8s 0/1 Pending 0 33m
more-ingress test-ingress-vk5wq 0/1 Pending 0 33m
more-nodeport test-nodeport-694f69f944-4x7z9 0/1 Pending 0 33m
more-nodeport test-nodeport-694f69f944-zbzbc 0/1 Pending 0 33m
tigera-operator tigera-operator-fcbdc5c89-9cnks 1/1 Running 0 50m
Describe pod rke2-multus-clzgl:
$ kspd rke2-multus-clzgl
Name: rke2-multus-clzgl
Namespace: kube-system
Priority: 2000001000
Priority Class Name: system-node-critical
Service Account: multus
Node: ip-172-31-25-66.us-east-2.compute.internal/172.31.25.66
Start Time: Wed, 09 Jul 2025 18:31:36 +0000
Labels: app=rke2-multus
controller-revision-hash=688799fc76
pod-template-generation=1
tier=node
Annotations: checksum/config: fd23672d720fdddd8faf8f629c7eec5c444ad6484d2a654621fa7d4b5c65c4fc
Status: Running
IP: 172.31.25.66
IPs:
IP: 172.31.25.66
Controlled By: DaemonSet/rke2-multus
Init Containers:
cni-plugins:
Container ID: containerd://a662cf8406bc85bb82db7c6df9fac8d0656fe3a1611acf26c0256e6b15aaac0e
Image: rancher/hardened-cni-plugins:v1.7.1-build20250611
Image ID: docker.io/rancher/hardened-cni-plugins@sha256:e3781380ebf29eefe13ef616e959879624958cc34b0039bffacfc03aa3eb5833
Port: <none>
Host Port: <none>
State: Terminated
Reason: Completed
Exit Code: 0
Started: Wed, 09 Jul 2025 18:31:39 +0000
Finished: Wed, 09 Jul 2025 18:31:39 +0000
Ready: True
Restart Count: 0
Environment:
SKIP_CNI_BINARIES: flannel
Mounts:
/host/opt/cni/bin from cnibin (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-t6877 (ro)
Containers:
kube-rke2-multus:
Container ID: containerd://248735282be6453edc287dfde15c46fdaaf3e943222463bc5674a85caf5f5420
Image: rancher/hardened-multus-cni:v4.2.1-build20250607
Image ID: docker.io/rancher/hardened-multus-cni@sha256:21ea6877347b774d45c0d59ac948051a29626a2df9c3bcd8abc06f39d542b0df
Port: <none>
Host Port: <none>
Command:
/thin_entrypoint
Args:
--multus-conf-file=auto
--multus-kubeconfig-file-host=/etc/cni/net.d/multus.d/multus.kubeconfig
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Wed, 09 Jul 2025 19:19:02 +0000
Finished: Wed, 09 Jul 2025 19:19:02 +0000
Ready: False
Restart Count: 14
Limits:
cpu: 2
memory: 1Gi
Requests:
cpu: 250m
memory: 128Mi
Environment:
KUBERNETES_NODE_NAME: (v1:spec.nodeName)
Mounts:
/host/etc/cni/net.d from cni (rw)
/host/opt/cni/bin from cnibin (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-t6877 (ro)
Conditions:
Type Status
PodReadyToStartContainers True
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
cni:
Type: HostPath (bare host directory volume)
Path: /etc/cni/net.d
HostPathType:
cnibin:
Type: HostPath (bare host directory volume)
Path: /opt/cni/bin
HostPathType:
kube-api-access-t6877:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
Optional: false
DownwardAPI: true
QoS Class: Burstable
Node-Selectors: kubernetes.io/os=linux
Tolerations: :NoSchedule op=Exists
:NoExecute op=Exists
node.kubernetes.io/disk-pressure:NoSchedule op=Exists
node.kubernetes.io/memory-pressure:NoSchedule op=Exists
node.kubernetes.io/network-unavailable:NoSchedule op=Exists
node.kubernetes.io/not-ready:NoExecute op=Exists
node.kubernetes.io/pid-pressure:NoSchedule op=Exists
node.kubernetes.io/unreachable:NoExecute op=Exists
node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 49m default-scheduler Successfully assigned kube-system/rke2-multus-clzgl to ip-172-31-25-66.us-east-2.compute.internal
Normal Pulling 49m kubelet Pulling image "rancher/hardened-cni-plugins:v1.7.1-build20250611"
Normal Pulled 49m kubelet Successfully pulled image "rancher/hardened-cni-plugins:v1.7.1-build20250611" in 2.439s (2.439s including waiting). Image size: 48904234 bytes.
Normal Created 49m kubelet Created container: cni-plugins
Normal Started 49m kubelet Started container cni-plugins
Normal Pulling 49m kubelet Pulling image "rancher/hardened-multus-cni:v4.2.1-build20250607"
Normal Pulled 49m kubelet Successfully pulled image "rancher/hardened-multus-cni:v4.2.1-build20250607" in 1.747s (1.747s including waiting). Image size: 36934123 bytes.
Normal Created 43m (x7 over 49m) kubelet Created container: kube-rke2-multus
Normal Started 43m (x7 over 49m) kubelet Started container kube-rke2-multus
Warning BackOff 3m52s (x208 over 49m) kubelet Back-off restarting failed container kube-rke2-multus in pod rke2-multus-clzgl_kube-system(a91b56dc-7127-4a08-8b96-45ea40a3287c)
Normal Pulled 113s (x14 over 49m) kubelet Container image "rancher/hardened-multus-cni:v4.2.1-build20250607" already present on machine
Pod logs:
$ kspl rke2-multus-clzgl
Defaulted container "kube-rke2-multus" out of: kube-rke2-multus, cni-plugins (init)
kubeconfig is created in /host/etc/cni/net.d/multus.d/multus.kubeconfig
kubeconfig file is created.
failed to create multus config: cannot find valid master CNI config in "/host/etc/cni/net.d"
$ kspl rke2-multus-clzgl -c kube-rke2-multus
kubeconfig is created in /host/etc/cni/net.d/multus.d/multus.kubeconfig
failed to create multus config: cannot find valid master CNI config in "/host/etc/cni/net.d"
kubeconfig file is created.
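The "cannot find valid master CNI config" error means multus never found a primary CNI conflist in /etc/cni/net.d on the host, which is consistent with the Calico installer failing. A quick host-side check (a suggested diagnostic, not from the report) is to list that directory with SELinux contexts:
$ sudo ls -lZ /etc/cni/net.d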
canal results (this run used v1.31.10+rke2r1; see the VERSION column):
$ kno
NAME STATUS ROLES AGE VERSION
ip-172-31-17-160.us-east-2.compute.internal NotReady <none> 23m v1.31.10+rke2r1
ip-172-31-28-209.us-east-2.compute.internal NotReady control-plane,etcd,master 25m v1.31.10+rke2r1
[ec2-user@ip-172-31-28-209 ~]$ kpo
NAMESPACE NAME READY STATUS RESTARTS AGE
auto-clusterip test-clusterip-6b86dc97bd-2vqlm 0/1 Pending 0 10m
auto-clusterip test-clusterip-6b86dc97bd-hv4mc 0/1 Pending 0 10m
auto-dns dnsutils 0/1 Pending 0 10m
auto-ingress test-ingress-2bh77 0/1 Pending 0 10m
auto-ingress test-ingress-4nhf2 0/1 Pending 0 10m
auto-nodeport test-nodeport-655c76c448-dmlvl 0/1 Pending 0 10m
auto-nodeport test-nodeport-655c76c448-s62l8 0/1 Pending 0 10m
clusterip clusterip-pod-demo 0/1 Pending 0 10m
clusterip clusterip-pod-demo-2 0/1 Pending 0 10m
clusterip clusterip-pod-demo-3 0/1 Pending 0 10m
kube-system cloud-controller-manager-ip-172-31-28-209.us-east-2.compute.internal 1/1 Running 0 25m
kube-system etcd-ip-172-31-28-209.us-east-2.compute.internal 1/1 Running 0 25m
kube-system helm-install-rke2-canal-4rjg5 0/1 Completed 0 25m
kube-system helm-install-rke2-coredns-4jkhf 0/1 Completed 0 25m
kube-system helm-install-rke2-ingress-nginx-pgvf6 0/1 Pending 0 25m
kube-system helm-install-rke2-metrics-server-xdxwv 0/1 Pending 0 25m
kube-system helm-install-rke2-runtimeclasses-tg97j 0/1 Pending 0 25m
kube-system helm-install-rke2-snapshot-controller-55w5d 0/1 Pending 0 25m
kube-system helm-install-rke2-snapshot-controller-crd-wmkhz 0/1 Pending 0 25m
kube-system kube-apiserver-ip-172-31-28-209.us-east-2.compute.internal 1/1 Running 0 25m
kube-system kube-controller-manager-ip-172-31-28-209.us-east-2.compute.internal 1/1 Running 0 25m
kube-system kube-proxy-ip-172-31-17-160.us-east-2.compute.internal 1/1 Running 8 (99s ago) 23m
kube-system kube-proxy-ip-172-31-28-209.us-east-2.compute.internal 0/1 CrashLoopBackOff 8 (2m8s ago) 25m
kube-system kube-scheduler-ip-172-31-28-209.us-east-2.compute.internal 1/1 Running 0 25m
kube-system rke2-canal-mzt6d 0/2 Init:CrashLoopBackOff 8 (4m38s ago) 25m
kube-system rke2-canal-r5jrk 0/2 Init:CrashLoopBackOff 8 (2m19s ago) 23m
kube-system rke2-coredns-rke2-coredns-7976d868d8-q22hg 0/1 Pending 0 25m
kube-system rke2-coredns-rke2-coredns-autoscaler-f76878df-s6wzq 0/1 Pending 0 25m
more-clusterip test-clusterip-6b86dc97bd-2lmms 0/1 Pending 0 7m9s
more-clusterip test-clusterip-6b86dc97bd-788rd 0/1 Pending 0 7m9s
more-dns dnsutils 0/1 Pending 0 7m9s
more-ingress test-ingress-kqnmf 0/1 Pending 0 7m9s
more-ingress test-ingress-l7xhf 0/1 Pending 0 7m9s
more-nodeport test-nodeport-655c76c448-94kvr 0/1 Pending 0 7m9s
more-nodeport test-nodeport-655c76c448-qhdfh 0/1 Pending 0 7m9s
[ec2-user@ip-172-31-28-209 ~]$ kspl rke2-canal-mzt6d
Defaulted container "calico-node" out of: calico-node, kube-flannel, install-cni (init), flexvol-driver (init)
Error from server (BadRequest): container "calico-node" in pod "rke2-canal-mzt6d" is waiting to start: PodInitializing
[ec2-user@ip-172-31-28-209 ~]$ kspd rke2-canal-mzt6d
Name: rke2-canal-mzt6d
Namespace: kube-system
Priority: 2000001000
Priority Class Name: system-node-critical
Service Account: canal
Node: ip-172-31-28-209.us-east-2.compute.internal/172.31.28.209
Start Time: Wed, 09 Jul 2025 18:33:55 +0000
Labels: controller-revision-hash=6f475fd6d8
k8s-app=canal
pod-template-generation=1
Annotations: <none>
Status: Pending
IP: 172.31.28.209
IPs:
IP: 172.31.28.209
Controlled By: DaemonSet/rke2-canal
Init Containers:
install-cni:
Container ID: containerd://42acdca6783c60a781384493400726bab90bcb28e06abf450c812fcf8949085c
Image: rancher/hardened-calico:v3.30.1-build20250611
Image ID: docker.io/rancher/hardened-calico@sha256:a6325cd34a9ed664d23b443046fe387879377d987f6624262b55acaffe246c20
Port: <none>
Host Port: <none>
Command:
/opt/cni/bin/install
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Wed, 09 Jul 2025 18:54:27 +0000
Finished: Wed, 09 Jul 2025 18:54:58 +0000
Ready: False
Restart Count: 8
Environment:
CALICO_CNI_SERVICE_ACCOUNT: (v1:spec.serviceAccountName)
CNI_CONF_NAME: 10-canal.conflist
CNI_NETWORK_CONFIG: <set to the key 'cni_network_config' of config map 'rke2-canal-config'> Optional: false
KUBERNETES_NODE_NAME: (v1:spec.nodeName)
CNI_MTU: <set to the key 'veth_mtu' of config map 'rke2-canal-config'> Optional: false
SLEEP: false
Mounts:
/host/etc/cni/net.d from cni-net-dir (rw)
/host/opt/cni/bin from cni-bin-dir (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4fwgm (ro)
flexvol-driver:
Container ID:
Image: rancher/hardened-calico:v3.30.1-build20250611
Image ID:
Port: <none>
Host Port: <none>
Command:
/usr/local/bin/flexvol.sh
-s
/usr/local/bin/flexvol
-i
flexvoldriver
State: Waiting
Reason: PodInitializing
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/host/driver from flexvol-driver-host (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4fwgm (ro)
Containers:
calico-node:
Container ID:
Image: rancher/hardened-calico:v3.30.1-build20250611
Image ID:
Port: <none>
Host Port: <none>
Command:
start_runit
State: Waiting
Reason: PodInitializing
Ready: False
Restart Count: 0
Requests:
cpu: 250m
Liveness: exec [/usr/bin/calico-node -felix-live] delay=10s timeout=10s period=10s #success=1 #failure=6
Readiness: http-get http://localhost:9099/readiness delay=0s timeout=10s period=10s #success=1 #failure=3
Environment:
DATASTORE_TYPE: kubernetes
USE_POD_CIDR: true
WAIT_FOR_DATASTORE: true
NODENAME: (v1:spec.nodeName)
CALICO_CNI_SERVICE_ACCOUNT: (v1:spec.serviceAccountName)
CALICO_NETWORKING_BACKEND: none
CLUSTER_TYPE: k8s,canal
FELIX_IPTABLESREFRESHINTERVAL: 60
FELIX_IPTABLESBACKEND: auto
CALICO_DISABLE_FILE_LOGGING: true
FELIX_DEFAULTENDPOINTTOHOSTACTION: ACCEPT
FELIX_IPV6SUPPORT: false
FELIX_LOGSEVERITYSCREEN: info
FELIX_HEALTHENABLED: true
FELIX_PROMETHEUSMETRICSENABLED: true
FELIX_XDPENABLED: false
FELIX_FAILSAFEINBOUNDHOSTPORTS:
FELIX_FAILSAFEOUTBOUNDHOSTPORTS:
FELIX_IPTABLESMARKMASK: 0xffff0000
IP_AUTODETECTION_METHOD: first-found
Mounts:
/host/etc/cni/net.d from cni-net-dir (rw)
/lib/modules from lib-modules (ro)
/run/xtables.lock from xtables-lock (rw)
/var/lib/calico from var-lib-calico (rw)
/var/log/calico/cni from cni-log-dir (ro)
/var/run/calico from var-run-calico (rw)
/var/run/nodeagent from policysync (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4fwgm (ro)
kube-flannel:
Container ID:
Image: rancher/hardened-flannel:v0.27.0-build20250611
Image ID:
Port: <none>
Host Port: <none>
Command:
/opt/bin/flanneld
--ip-masq
--kube-subnet-mgr
--iptables-forward-rules=false
State: Waiting
Reason: PodInitializing
Ready: False
Restart Count: 0
Environment:
POD_NAME: rke2-canal-mzt6d (v1:metadata.name)
POD_NAMESPACE: kube-system (v1:metadata.namespace)
FLANNELD_IFACE: <set to the key 'canal_iface' of config map 'rke2-canal-config'> Optional: false
FLANNELD_IFACE_REGEX: <set to the key 'canal_iface_regex' of config map 'rke2-canal-config'> Optional: false
FLANNELD_IP_MASQ: <set to the key 'masquerade' of config map 'rke2-canal-config'> Optional: false
Mounts:
/etc/kube-flannel/ from flannel-cfg (rw)
/run/xtables.lock from xtables-lock (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4fwgm (ro)
Conditions:
Type Status
PodReadyToStartContainers True
Initialized False
Ready False
ContainersReady False
PodScheduled True
Volumes:
lib-modules:
Type: HostPath (bare host directory volume)
Path: /lib/modules
HostPathType:
var-run-calico:
Type: HostPath (bare host directory volume)
Path: /var/run/calico
HostPathType:
var-lib-calico:
Type: HostPath (bare host directory volume)
Path: /var/lib/calico
HostPathType:
xtables-lock:
Type: HostPath (bare host directory volume)
Path: /run/xtables.lock
HostPathType: FileOrCreate
flannel-cfg:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: rke2-canal-config
Optional: false
cni-bin-dir:
Type: HostPath (bare host directory volume)
Path: /opt/cni/bin
HostPathType:
cni-net-dir:
Type: HostPath (bare host directory volume)
Path: /etc/cni/net.d
HostPathType:
cni-log-dir:
Type: HostPath (bare host directory volume)
Path: /var/log/calico/cni
HostPathType:
policysync:
Type: HostPath (bare host directory volume)
Path: /var/run/nodeagent
HostPathType: DirectoryOrCreate
flexvol-driver-host:
Type: HostPath (bare host directory volume)
Path: /var/lib/kubelet/volumeplugins/nodeagent~uds
HostPathType: DirectoryOrCreate
kube-api-access-4fwgm:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: Burstable
Node-Selectors: kubernetes.io/os=linux
Tolerations: :NoSchedule op=Exists
:NoExecute op=Exists
CriticalAddonsOnly op=Exists
node.kubernetes.io/disk-pressure:NoSchedule op=Exists
node.kubernetes.io/memory-pressure:NoSchedule op=Exists
node.kubernetes.io/network-unavailable:NoSchedule op=Exists
node.kubernetes.io/not-ready:NoExecute op=Exists
node.kubernetes.io/pid-pressure:NoSchedule op=Exists
node.kubernetes.io/unreachable:NoExecute op=Exists
node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 26m default-scheduler Successfully assigned kube-system/rke2-canal-mzt6d to ip-172-31-28-209.us-east-2.compute.internal
Normal Pulling 26m kubelet Pulling image "rancher/hardened-calico:v3.30.1-build20250611"
Normal Pulled 25m kubelet Successfully pulled image "rancher/hardened-calico:v3.30.1-build20250611" in 10.582s (10.582s including waiting). Image size: 230367596 bytes.
Normal Created 22m (x5 over 25m) kubelet Created container: install-cni
Normal Pulled 22m (x4 over 25m) kubelet Container image "rancher/hardened-calico:v3.30.1-build20250611" already present on machine
Normal Started 22m (x5 over 25m) kubelet Started container install-cni
Warning BackOff 64s (x93 over 24m) kubelet Back-off restarting failed container install-cni in pod rke2-canal-mzt6d_kube-system(0f51f2ee-545a-43ff-a082-bbede0a95c93)
[ec2-user@ip-172-31-28-209 ~]$ kspl rke2-canal-mzt6d -c install-cni
2025-07-09 19:00:02.177 [INFO][1] cni-installer/install.go 139: Running as a Kubernetes pod
2025-07-09 19:00:02.209 [INFO][1] cni-installer/install.go 436: File is already up to date, skipping file="/host/opt/cni/bin/bandwidth"
2025-07-09 19:00:02.233 [INFO][1] cni-installer/install.go 225: Installed /host/opt/cni/bin/bandwidth
2025-07-09 19:00:02.252 [INFO][1] cni-installer/install.go 436: File is already up to date, skipping file="/host/opt/cni/bin/bond"
2025-07-09 19:00:02.262 [INFO][1] cni-installer/install.go 225: Installed /host/opt/cni/bin/bond
2025-07-09 19:00:02.285 [INFO][1] cni-installer/install.go 436: File is already up to date, skipping file="/host/opt/cni/bin/bridge"
2025-07-09 19:00:02.308 [INFO][1] cni-installer/install.go 225: Installed /host/opt/cni/bin/bridge
2025-07-09 19:00:02.457 [INFO][1] cni-installer/install.go 436: File is already up to date, skipping file="/host/opt/cni/bin/calico"
2025-07-09 19:00:02.627 [INFO][1] cni-installer/install.go 225: Installed /host/opt/cni/bin/calico
2025-07-09 19:00:02.790 [INFO][1] cni-installer/install.go 436: File is already up to date, skipping file="/host/opt/cni/bin/calico-ipam"
2025-07-09 19:00:02.926 [INFO][1] cni-installer/install.go 225: Installed /host/opt/cni/bin/calico-ipam
2025-07-09 19:00:02.992 [INFO][1] cni-installer/install.go 436: File is already up to date, skipping file="/host/opt/cni/bin/dhcp"
2025-07-09 19:00:03.034 [INFO][1] cni-installer/install.go 225: Installed /host/opt/cni/bin/dhcp
2025-07-09 19:00:03.065 [INFO][1] cni-installer/install.go 436: File is already up to date, skipping file="/host/opt/cni/bin/dummy"
2025-07-09 19:00:03.072 [INFO][1] cni-installer/install.go 225: Installed /host/opt/cni/bin/dummy
2025-07-09 19:00:03.084 [INFO][1] cni-installer/install.go 436: File is already up to date, skipping file="/host/opt/cni/bin/firewall"
2025-07-09 19:00:03.115 [INFO][1] cni-installer/install.go 225: Installed /host/opt/cni/bin/firewall
2025-07-09 19:00:03.119 [INFO][1] cni-installer/install.go 436: File is already up to date, skipping file="/host/opt/cni/bin/flannel"
2025-07-09 19:00:03.126 [INFO][1] cni-installer/install.go 225: Installed /host/opt/cni/bin/flannel
2025-07-09 19:00:03.134 [INFO][1] cni-installer/install.go 436: File is already up to date, skipping file="/host/opt/cni/bin/host-device"
2025-07-09 19:00:03.146 [INFO][1] cni-installer/install.go 225: Installed /host/opt/cni/bin/host-device
2025-07-09 19:00:03.165 [INFO][1] cni-installer/install.go 436: File is already up to date, skipping file="/host/opt/cni/bin/host-local"
2025-07-09 19:00:03.190 [INFO][1] cni-installer/install.go 225: Installed /host/opt/cni/bin/host-local
2025-07-09 19:00:03.215 [INFO][1] cni-installer/install.go 436: File is already up to date, skipping file="/host/opt/cni/bin/ipvlan"
2025-07-09 19:00:03.227 [INFO][1] cni-installer/install.go 225: Installed /host/opt/cni/bin/ipvlan
2025-07-09 19:00:03.233 [INFO][1] cni-installer/install.go 436: File is already up to date, skipping file="/host/opt/cni/bin/loopback"
2025-07-09 19:00:03.238 [INFO][1] cni-installer/install.go 225: Installed /host/opt/cni/bin/loopback
2025-07-09 19:00:03.265 [INFO][1] cni-installer/install.go 436: File is already up to date, skipping file="/host/opt/cni/bin/macvlan"
2025-07-09 19:00:03.280 [INFO][1] cni-installer/install.go 225: Installed /host/opt/cni/bin/macvlan
2025-07-09 19:00:03.295 [INFO][1] cni-installer/install.go 436: File is already up to date, skipping file="/host/opt/cni/bin/portmap"
2025-07-09 19:00:03.317 [INFO][1] cni-installer/install.go 225: Installed /host/opt/cni/bin/portmap
2025-07-09 19:00:03.329 [INFO][1] cni-installer/install.go 436: File is already up to date, skipping file="/host/opt/cni/bin/ptp"
2025-07-09 19:00:03.338 [INFO][1] cni-installer/install.go 225: Installed /host/opt/cni/bin/ptp
2025-07-09 19:00:03.345 [INFO][1] cni-installer/install.go 436: File is already up to date, skipping file="/host/opt/cni/bin/sbr"
2025-07-09 19:00:03.365 [INFO][1] cni-installer/install.go 225: Installed /host/opt/cni/bin/sbr
2025-07-09 19:00:03.367 [INFO][1] cni-installer/install.go 436: File is already up to date, skipping file="/host/opt/cni/bin/static"
2025-07-09 19:00:03.371 [INFO][1] cni-installer/install.go 225: Installed /host/opt/cni/bin/static
2025-07-09 19:00:03.392 [INFO][1] cni-installer/install.go 436: File is already up to date, skipping file="/host/opt/cni/bin/tap"
2025-07-09 19:00:03.424 [INFO][1] cni-installer/install.go 225: Installed /host/opt/cni/bin/tap
2025-07-09 19:00:03.429 [INFO][1] cni-installer/install.go 436: File is already up to date, skipping file="/host/opt/cni/bin/tuning"
2025-07-09 19:00:03.435 [INFO][1] cni-installer/install.go 225: Installed /host/opt/cni/bin/tuning
2025-07-09 19:00:03.452 [INFO][1] cni-installer/install.go 436: File is already up to date, skipping file="/host/opt/cni/bin/vlan"
2025-07-09 19:00:03.475 [INFO][1] cni-installer/install.go 225: Installed /host/opt/cni/bin/vlan
2025-07-09 19:00:03.491 [INFO][1] cni-installer/install.go 436: File is already up to date, skipping file="/host/opt/cni/bin/vrf"
2025-07-09 19:00:03.502 [INFO][1] cni-installer/install.go 225: Installed /host/opt/cni/bin/vrf
2025-07-09 19:00:03.502 [INFO][1] cni-installer/install.go 229: Wrote Calico CNI binaries to /host/opt/cni/bin
2025-07-09 19:00:03.502 [INFO][1] cni-installer/install.go 234: CNI plugin version: v3.30.1
2025-07-09 19:00:03.502 [INFO][1] cni-installer/install.go 186: /host/secondary-bin-dir is not writeable, skipping
2025-07-09 19:00:03.502 [INFO][1] cni-installer/winutils.go 149: Neither --kubeconfig nor --master was specified. Using the inClusterConfig.
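The install-cni container writes all of the CNI binaries successfully and then exits 1 immediately after switching to the in-cluster config, with nothing else logged. On an enforcing SELinux host, checking the audit log for AVC denials around the restart times would be the natural next step (a suggested diagnostic, not part of the report):
$ sudo ausearch -m avc -ts recent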
cilium results:
$ kno
NAME STATUS ROLES AGE VERSION
ip-172-31-21-99.us-east-2.compute.internal NotReady control-plane,etcd,master 28m v1.33.2+rke2r1
ip-172-31-31-223.us-east-2.compute.internal NotReady <none> 24m v1.33.2+rke2r1
[ec2-user@ip-172-31-21-99 ~]$ kpo
NAMESPACE NAME READY STATUS RESTARTS AGE
auto-clusterip test-clusterip-75c677b668-cjwjn 0/1 Pending 0 10m
auto-clusterip test-clusterip-75c677b668-zc494 0/1 Pending 0 10m
auto-dns dnsutils 0/1 Pending 0 10m
auto-ingress test-ingress-44rhf 0/1 Pending 0 10m
auto-ingress test-ingress-45d2s 0/1 Pending 0 10m
auto-nodeport test-nodeport-694f69f944-stfdh 0/1 Pending 0 10m
auto-nodeport test-nodeport-694f69f944-v792b 0/1 Pending 0 10m
clusterip clusterip-pod-demo 0/1 Pending 0 11m
clusterip clusterip-pod-demo-2 0/1 Pending 0 11m
clusterip clusterip-pod-demo-3 0/1 Pending 0 11m
kube-system cilium-5d4pp 0/1 Init:CrashLoopBackOff 8 (106s ago) 28m
kube-system cilium-operator-7b55f8f6bb-87bp8 0/1 CrashLoopBackOff 9 (3m15s ago) 28m
kube-system cilium-operator-7b55f8f6bb-mpkjs 0/1 Running 10 (5m32s ago) 28m
kube-system cilium-vt7xv 0/1 Init:1/7 8 (5m14s ago) 24m
kube-system cloud-controller-manager-ip-172-31-21-99.us-east-2.compute.internal 1/1 Running 0 28m
kube-system etcd-ip-172-31-21-99.us-east-2.compute.internal 1/1 Running 0 28m
kube-system helm-install-rke2-cilium-c28m2 0/1 Completed 0 28m
kube-system helm-install-rke2-coredns-b2v9c 0/1 Completed 0 28m
kube-system helm-install-rke2-ingress-nginx-6sq4b 0/1 Pending 0 28m
kube-system helm-install-rke2-metrics-server-w2qpq 0/1 Pending 0 28m
kube-system helm-install-rke2-runtimeclasses-cpgn7 0/1 Pending 0 28m
kube-system helm-install-rke2-snapshot-controller-crd-4bk8m 0/1 Pending 0 28m
kube-system helm-install-rke2-snapshot-controller-ts5tl 0/1 Pending 0 28m
kube-system kube-apiserver-ip-172-31-21-99.us-east-2.compute.internal 1/1 Running 0 28m
kube-system kube-controller-manager-ip-172-31-21-99.us-east-2.compute.internal 1/1 Running 0 28m
kube-system kube-proxy-ip-172-31-21-99.us-east-2.compute.internal 0/1 CrashLoopBackOff 8 (4m52s ago) 28m
kube-system kube-proxy-ip-172-31-31-223.us-east-2.compute.internal 0/1 CrashLoopBackOff 8 (71s ago) 24m
kube-system kube-scheduler-ip-172-31-21-99.us-east-2.compute.internal 1/1 Running 0 28m
kube-system rke2-coredns-rke2-coredns-65dc69968-5z5bd 0/1 Pending 0 28m
kube-system rke2-coredns-rke2-coredns-autoscaler-68d5f76f7-h8xmg 0/1 Pending 0 28m
more-clusterip test-clusterip-75c677b668-f9ds6 0/1 Pending 0 6m57s
more-clusterip test-clusterip-75c677b668-nvwrm 0/1 Pending 0 6m57s
more-dns dnsutils 0/1 Pending 0 6m57s
more-ingress test-ingress-n2pdl 0/1 Pending 0 6m57s
more-ingress test-ingress-z2fk6 0/1 Pending 0 6m57s
more-nodeport test-nodeport-694f69f944-8qwjr 0/1 Pending 0 6m57s
more-nodeport test-nodeport-694f69f944-xfvvl 0/1 Pending 0 6m57s
[ec2-user@ip-172-31-21-99 ~]$ kspd cilium-5d4pp
Name: cilium-5d4pp
Namespace: kube-system
Priority: 2000001000
Priority Class Name: system-node-critical
Service Account: cilium
Node: ip-172-31-21-99.us-east-2.compute.internal/172.31.21.99
Start Time: Wed, 09 Jul 2025 18:37:10 +0000
Labels: app.kubernetes.io/name=cilium-agent
app.kubernetes.io/part-of=cilium
controller-revision-hash=7ccf5d9c4c
k8s-app=cilium
pod-template-generation=1
Annotations: container.apparmor.security.beta.kubernetes.io/apply-sysctl-overwrites: unconfined
container.apparmor.security.beta.kubernetes.io/cilium-agent: unconfined
container.apparmor.security.beta.kubernetes.io/clean-cilium-state: unconfined
container.apparmor.security.beta.kubernetes.io/config: unconfined
container.apparmor.security.beta.kubernetes.io/install-cni-binaries: unconfined
container.apparmor.security.beta.kubernetes.io/install-portmap-cni-plugin: unconfined
container.apparmor.security.beta.kubernetes.io/mount-bpf-fs: unconfined
container.apparmor.security.beta.kubernetes.io/mount-cgroup: unconfined
Status: Pending
IP: 172.31.21.99
IPs:
IP: 172.31.21.99
Controlled By: DaemonSet/cilium
Init Containers:
install-portmap-cni-plugin:
Container ID: containerd://b8636c9cb20360f6d01ecce88a0ba4a379d7ac0e025b5c2839ac9b5cd977cf69
Image: rancher/hardened-cni-plugins:v1.7.1-build20250611
Image ID: docker.io/rancher/hardened-cni-plugins@sha256:e3781380ebf29eefe13ef616e959879624958cc34b0039bffacfc03aa3eb5833
Port: <none>
Host Port: <none>
State: Terminated
Reason: Completed
Exit Code: 0
Started: Wed, 09 Jul 2025 18:37:19 +0000
Finished: Wed, 09 Jul 2025 18:37:19 +0000
Ready: True
Restart Count: 0
Environment:
SKIP_CNI_BINARIES: bandwidth,bridge,dhcp,firewall,flannel,host-device,host-local,ipvlan,loopback,macvlan,ptp,sbr,static,tuning,vlan,vrf
Mounts:
/host/opt/cni/bin from cni-path (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-q6bsn (ro)
config:
Container ID: containerd://8ce4a5d94dcf651f765c8581523129228d16e88a0a53515dba848e1d405e90c4
Image: rancher/mirrored-cilium-cilium:v1.17.4
Image ID: docker.io/rancher/mirrored-cilium-cilium@sha256:24a73fe795351cf3279ac8e84918633000b52a9654ff73a6b0d7223bcff4a67a
Port: <none>
Host Port: <none>
Command:
cilium-dbg
build-config
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Message: Running
2025/07/09 19:02:31 INFO Starting hive
time="2025-07-09T19:02:31.520954352Z" level=info msg="Establishing connection to apiserver" host="https://10.43.0.1:443" subsys=k8s-client
time="2025-07-09T19:03:06.53999759Z" level=info msg="Establishing connection to apiserver" host="https://10.43.0.1:443" subsys=k8s-client
time="2025-07-09T19:03:36.540572212Z" level=error msg="Unable to contact k8s api-server" error="Get \"https://10.43.0.1:443/api/v1/namespaces/kube-system\": dial tcp 10.43.0.1:443: i/o timeout" ipAddr="https://10.43.0.1:443" subsys=k8s-client
2025/07/09 19:03:36 ERROR Start hook failed function="client.(*compositeClientset).onStart (k8s-client)" error="Get \"https://10.43.0.1:443/api/v1/namespaces/kube-system\": dial tcp 10.43.0.1:443: i/o timeout"
2025/07/09 19:03:36 ERROR Start failed error="Get \"https://10.43.0.1:443/api/v1/namespaces/kube-system\": dial tcp 10.43.0.1:443: i/o timeout" duration=1m5.01977347s
2025/07/09 19:03:36 INFO Stopping
Error: Build config failed: failed to start: Get "https://10.43.0.1:443/api/v1/namespaces/kube-system": dial tcp 10.43.0.1:443: i/o timeout
Exit Code: 1
Started: Wed, 09 Jul 2025 19:02:31 +0000
Finished: Wed, 09 Jul 2025 19:03:36 +0000
Ready: False
Restart Count: 8
Environment:
K8S_NODE_NAME: (v1:spec.nodeName)
CILIUM_K8S_NAMESPACE: kube-system (v1:metadata.namespace)
Mounts:
/tmp from tmp (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-q6bsn (ro)
mount-cgroup:
Container ID:
Image: rancher/mirrored-cilium-cilium:v1.17.4
Image ID:
Port: <none>
Host Port: <none>
Command:
sh
-ec
cp /usr/bin/cilium-mount /hostbin/cilium-mount;
nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT;
rm /hostbin/cilium-mount
State: Waiting
Reason: PodInitializing
Ready: False
Restart Count: 0
Environment:
CGROUP_ROOT: /run/cilium/cgroupv2
BIN_PATH: /opt/cni/bin
Mounts:
/hostbin from cni-path (rw)
/hostproc from hostproc (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-q6bsn (ro)
apply-sysctl-overwrites:
Container ID:
Image: rancher/mirrored-cilium-cilium:v1.17.4
Image ID:
Port: <none>
Host Port: <none>
Command:
sh
-ec
cp /usr/bin/cilium-sysctlfix /hostbin/cilium-sysctlfix;
nsenter --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-sysctlfix";
rm /hostbin/cilium-sysctlfix
State: Waiting
Reason: PodInitializing
Ready: False
Restart Count: 0
Environment:
BIN_PATH: /opt/cni/bin
Mounts:
/hostbin from cni-path (rw)
/hostproc from hostproc (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-q6bsn (ro)
mount-bpf-fs:
Container ID:
Image: rancher/mirrored-cilium-cilium:v1.17.4
Image ID:
Port: <none>
Host Port: <none>
Command:
/bin/bash
-c
--
Args:
mount | grep "/sys/fs/bpf type bpf" || mount -t bpf bpf /sys/fs/bpf
State: Waiting
Reason: PodInitializing
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/sys/fs/bpf from bpf-maps (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-q6bsn (ro)
clean-cilium-state:
Container ID:
Image: rancher/mirrored-cilium-cilium:v1.17.4
Image ID:
Port: <none>
Host Port: <none>
Command:
/init-container.sh
State: Waiting
Reason: PodInitializing
Ready: False
Restart Count: 0
Environment:
CILIUM_ALL_STATE: <set to the key 'clean-cilium-state' of config map 'cilium-config'> Optional: true
CILIUM_BPF_STATE: <set to the key 'clean-cilium-bpf-state' of config map 'cilium-config'> Optional: true
WRITE_CNI_CONF_WHEN_READY: <set to the key 'write-cni-conf-when-ready' of config map 'cilium-config'> Optional: true
Mounts:
/run/cilium/cgroupv2 from cilium-cgroup (rw)
/sys/fs/bpf from bpf-maps (rw)
/var/run/cilium from cilium-run (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-q6bsn (ro)
install-cni-binaries:
Container ID:
Image: rancher/mirrored-cilium-cilium:v1.17.4
Image ID:
Port: <none>
Host Port: <none>
Command:
/install-plugin.sh
State: Waiting
Reason: PodInitializing
Ready: False
Restart Count: 0
Requests:
cpu: 100m
memory: 10Mi
Environment: <none>
Mounts:
/host/opt/cni/bin from cni-path (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-q6bsn (ro)
Containers:
cilium-agent:
Container ID:
Image: rancher/mirrored-cilium-cilium:v1.17.4
Image ID:
Port: <none>
Host Port: <none>
Command:
cilium-agent
Args:
--config-dir=/tmp/cilium/config-map
State: Waiting
Reason: PodInitializing
Ready: False
Restart Count: 0
Liveness: http-get http://127.0.0.1:9879/healthz delay=0s timeout=5s period=30s #success=1 #failure=10
Readiness: http-get http://127.0.0.1:9879/healthz delay=0s timeout=5s period=30s #success=1 #failure=3
Startup: http-get http://127.0.0.1:9879/healthz delay=5s timeout=1s period=2s #success=1 #failure=105
Environment:
K8S_NODE_NAME: (v1:spec.nodeName)
CILIUM_K8S_NAMESPACE: kube-system (v1:metadata.namespace)
CILIUM_CLUSTERMESH_CONFIG: /var/lib/cilium/clustermesh/
GOMEMLIMIT: node allocatable (limits.memory)
Mounts:
/host/etc/cni/net.d from etc-cni-netd (rw)
/host/proc/sys/kernel from host-proc-sys-kernel (rw)
/host/proc/sys/net from host-proc-sys-net (rw)
/lib/modules from lib-modules (ro)
/run/xtables.lock from xtables-lock (rw)
/sys/fs/bpf from bpf-maps (rw)
/tmp from tmp (rw)
/var/lib/cilium/clustermesh from clustermesh-secrets (ro)
/var/run/cilium from cilium-run (rw)
/var/run/cilium/netns from cilium-netns (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-q6bsn (ro)
Conditions:
Type Status
PodReadyToStartContainers True
Initialized False
Ready False
ContainersReady False
PodScheduled True
Volumes:
tmp:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
cilium-run:
Type: HostPath (bare host directory volume)
Path: /var/run/cilium
HostPathType: DirectoryOrCreate
cilium-netns:
Type: HostPath (bare host directory volume)
Path: /var/run/netns
HostPathType: DirectoryOrCreate
bpf-maps:
Type: HostPath (bare host directory volume)
Path: /sys/fs/bpf
HostPathType: DirectoryOrCreate
hostproc:
Type: HostPath (bare host directory volume)
Path: /proc
HostPathType: Directory
cilium-cgroup:
Type: HostPath (bare host directory volume)
Path: /run/cilium/cgroupv2
HostPathType: DirectoryOrCreate
cni-path:
Type: HostPath (bare host directory volume)
Path: /opt/cni/bin
HostPathType: DirectoryOrCreate
etc-cni-netd:
Type: HostPath (bare host directory volume)
Path: /etc/cni/net.d
HostPathType: DirectoryOrCreate
lib-modules:
Type: HostPath (bare host directory volume)
Path: /lib/modules
HostPathType:
xtables-lock:
Type: HostPath (bare host directory volume)
Path: /run/xtables.lock
HostPathType: FileOrCreate
clustermesh-secrets:
Type: Projected (a volume that contains injected data from multiple sources)
SecretName: cilium-clustermesh
Optional: true
SecretName: clustermesh-apiserver-remote-cert
Optional: true
SecretName: clustermesh-apiserver-local-cert
Optional: true
host-proc-sys-net:
Type: HostPath (bare host directory volume)
Path: /proc/sys/net
HostPathType: Directory
host-proc-sys-kernel:
Type: HostPath (bare host directory volume)
Path: /proc/sys/kernel
HostPathType: Directory
kube-api-access-q6bsn:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
Optional: false
DownwardAPI: true
QoS Class: Burstable
Node-Selectors: kubernetes.io/os=linux
Tolerations: op=Exists
node.kubernetes.io/disk-pressure:NoSchedule op=Exists
node.kubernetes.io/memory-pressure:NoSchedule op=Exists
node.kubernetes.io/network-unavailable:NoSchedule op=Exists
node.kubernetes.io/not-ready:NoExecute op=Exists
node.kubernetes.io/pid-pressure:NoSchedule op=Exists
node.kubernetes.io/unreachable:NoExecute op=Exists
node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 28m default-scheduler Successfully assigned kube-system/cilium-5d4pp to ip-172-31-21-99.us-east-2.compute.internal
Normal Pulling 28m kubelet Pulling image "rancher/hardened-cni-plugins:v1.7.1-build20250611"
Normal Pulled 28m kubelet Successfully pulled image "rancher/hardened-cni-plugins:v1.7.1-build20250611" in 7.387s (7.387s including waiting). Image size: 48904234 bytes.
Normal Created 28m kubelet Created container: install-portmap-cni-plugin
Normal Started 28m kubelet Started container install-portmap-cni-plugin
Normal Pulling 28m kubelet Pulling image "rancher/mirrored-cilium-cilium:v1.17.4"
Normal Pulled 27m kubelet Successfully pulled image "rancher/mirrored-cilium-cilium:v1.17.4" in 19.11s (19.11s including waiting). Image size: 271358817 bytes.
Normal Started 15m (x7 over 27m) kubelet Started container config
Normal Created 3m7s (x9 over 27m) kubelet Created container: config
Normal Pulled 3m7s (x8 over 26m) kubelet Container image "rancher/mirrored-cilium-cilium:v1.17.4" already present on machine
Warning BackOff 37s (x82 over 25m) kubelet Back-off restarting failed container config in pod cilium-5d4pp_kube-system(df03d6c9-3203-4863-ab18-fa575375ca12)
[ec2-user@ip-172-31-21-99 ~]$ kspl cilium-5d4pp
Defaulted container "cilium-agent" out of: cilium-agent, install-portmap-cni-plugin (init), config (init), mount-cgroup (init), apply-sysctl-overwrites (init), mount-bpf-fs (init), clean-cilium-state (init), install-cni-binaries (init)
Error from server (BadRequest): container "cilium-agent" in pod "cilium-5d4pp" is waiting to start: PodInitializing
$ kspl cilium-5d4pp -c install-portmap-cni-plugin
bandwidth is in SKIP_CNI_BINARIES, skipping
copied /opt/cni/bin/bond to /host/opt/cni/bin correctly
bridge is in SKIP_CNI_BINARIES, skipping
dhcp is in SKIP_CNI_BINARIES, skipping
copied /opt/cni/bin/dummy to /host/opt/cni/bin correctly
firewall is in SKIP_CNI_BINARIES, skipping
flannel is in SKIP_CNI_BINARIES, skipping
host-device is in SKIP_CNI_BINARIES, skipping
host-local is in SKIP_CNI_BINARIES, skipping
ipvlan is in SKIP_CNI_BINARIES, skipping
loopback is in SKIP_CNI_BINARIES, skipping
macvlan is in SKIP_CNI_BINARIES, skipping
copied /opt/cni/bin/portmap to /host/opt/cni/bin correctly
ptp is in SKIP_CNI_BINARIES, skipping
sbr is in SKIP_CNI_BINARIES, skipping
static is in SKIP_CNI_BINARIES, skipping
copied /opt/cni/bin/tap to /host/opt/cni/bin correctly
tuning is in SKIP_CNI_BINARIES, skipping
vlan is in SKIP_CNI_BINARIES, skipping
vrf is in SKIP_CNI_BINARIES, skipping
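The cilium config init container fails because it cannot reach the in-cluster apiserver Service at 10.43.0.1:443, which kube-proxy is responsible for forwarding, and kube-proxy itself is in CrashLoopBackOff on both nodes. Its previous-instance logs would likely show why (a suggested follow-up; the pod name is taken from the listing above):
$ kubectl -n kube-system logs kube-proxy-ip-172-31-21-99.us-east-2.compute.internal --previous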