Description
Environmental Info:
K3s Version:
k3s version v1.34.2+k3s1 (8dac81b)
go version go1.24.9
Node(s) CPU architecture, OS, and Version:
Linux machine.example.com 6.8.0-90-generic #91-Ubuntu SMP PREEMPT_DYNAMIC Tue Nov 18 14:14:30 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
Cluster Configuration:
Single all-in-one node.
Describe the bug:
When using K3s with CRI-O, things work reasonably well when CRI-O's example CNI bridge config https://github.com/cri-o/cri-o/blob/main/contrib/cni/10-crio-bridge.conflist is placed in /etc/cni/net.d. We discussed that setup in #11866 and #11870, and the conclusion was to use the CNI binaries shipped by K3s, with the CRI-O configuration
[crio.network]
plugin_dirs = [ "/var/lib/rancher/k3s/data/cni" ]
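As a sanity check before pointing CRI-O at that path, the shipped plugins can be listed; judging by the reexec.Init frame in the traceback further below, they are a single multi-call binary, so the entries may just be links to one file:
ls -l /var/lib/rancher/k3s/data/cni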
When testing some additional setups, I tried to use K3s' CNI configuration as well, by pointing network_dir at /var/lib/rancher/k3s/agent/etc/cni/net.d:
[crio.network]
plugin_dirs = [ "/var/lib/rancher/k3s/data/cni" ]
network_dir = "/var/lib/rancher/k3s/agent/etc/cni/net.d"
However, while the curl -sfL https://get.k3s.io | sh -s - installation then succeeds and the node is reported as Ready, the cluster stays stuck with Pods like coredns and metrics-server in the ContainerCreating state.
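One way to confirm which CNI configuration CRI-O actually loaded is to restart it and grep its journal for the network discovery messages (the exact wording may vary between CRI-O versions):
sudo systemctl restart crio.service
sudo journalctl -u crio.service | grep -i 'cni network'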
Steps To Reproduce:
- Have an Ubuntu 24.04 VM.
- Upgrade its packages to the latest and greatest:
sudo apt update && sudo apt upgrade -y
- Reboot:
sudo reboot
- Install CRI-O 1.34:
CRIO_VERSION=v1.34
curl -fsSL https://download.opensuse.org/repositories/isv:/cri-o:/stable:/$CRIO_VERSION/deb/Release.key | gpg --dearmor | sudo tee /etc/apt/keyrings/cri-o-apt-keyring.gpg > /dev/null
echo "deb [signed-by=/etc/apt/keyrings/cri-o-apt-keyring.gpg] https://download.opensuse.org/repositories/isv:/cri-o:/stable:/$CRIO_VERSION/deb/ /" | sudo tee /etc/apt/sources.list.d/cri-o.list
sudo apt update
sudo apt install -y cri-o
- Configure CRI-O to use K3s' plugins and CNI configuration:
( echo '[crio.network]' ; echo 'plugin_dirs = [ "/var/lib/rancher/k3s/data/cni" ]' ; echo 'network_dir = "/var/lib/rancher/k3s/agent/etc/cni/net.d"' ) | sudo tee /etc/crio/crio.conf.d/20-cni.conf
- Run CRI-O:
sudo systemctl start crio.service
- Install K3s:
export INSTALL_K3S_CHANNEL=latest
export INSTALL_K3S_EXEC='--container-runtime-endpoint /var/run/crio/crio.sock'
curl -sfL https://get.k3s.io | sh -s - --write-kubeconfig ~/.kube/config --write-kubeconfig-group $( id -g ) --write-kubeconfig-mode 640
- Check the node (a runtime sanity check is sketched just after this list):
kubectl get node
- After a reasonable amount of time, check the Pods:
kubectl get all -A
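To verify the kubelet is really talking to CRI-O rather than K3s' bundled containerd, the node's reported runtime can be inspected; it should print a cri-o:// version string:
kubectl get node -o jsonpath='{.items[0].status.nodeInfo.containerRuntimeVersion}{"\n"}'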
Expected behavior:
Everything Running or Completed.
Actual behavior:
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system pod/coredns-7f496c8d7d-vt5xw 0/1 ContainerCreating 0 38m
kube-system pod/helm-install-traefik-6jgds 0/1 ContainerCreating 0 38m
kube-system pod/helm-install-traefik-crd-g7479 0/1 ContainerCreating 0 38m
kube-system pod/local-path-provisioner-578895bd58-j24pg 0/1 ContainerCreating 0 38m
kube-system pod/metrics-server-7b9c9c4b9c-8p7sw 0/1 ContainerCreating 0 38m
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default service/kubernetes ClusterIP 10.43.0.1 <none> 443/TCP 38m
kube-system service/kube-dns ClusterIP 10.43.0.10 <none> 53/UDP,53/TCP,9153/TCP 38m
kube-system service/metrics-server ClusterIP 10.43.88.179 <none> 443/TCP 38m
NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE
kube-system deployment.apps/coredns 0/1 1 0 38m
kube-system deployment.apps/local-path-provisioner 0/1 1 0 38m
kube-system deployment.apps/metrics-server 0/1 1 0 38m
NAMESPACE NAME DESIRED CURRENT READY AGE
kube-system replicaset.apps/coredns-7f496c8d7d 1 1 0 38m
kube-system replicaset.apps/local-path-provisioner-578895bd58 1 1 0 38m
kube-system replicaset.apps/metrics-server-7b9c9c4b9c 1 1 0 38m
NAMESPACE NAME STATUS COMPLETIONS DURATION AGE
kube-system job.batch/helm-install-traefik Running 0/1 38m 38m
kube-system job.batch/helm-install-traefik-crd Running 0/1 38m 38m
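The same failure can be surfaced from the Kubernetes side by describing one of the stuck Pods (name taken from the listing above) or by listing recent events:
kubectl -n kube-system describe pod/coredns-7f496c8d7d-vt5xw
kubectl -n kube-system get events --sort-by=.lastTimestamp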
Additional context / logs:
The /var/lib/rancher/k3s/agent/etc/cni/net.d directory contains only 10-flannel.conflist, with
{
"name":"cbr0",
"cniVersion":"1.0.0",
"plugins":[
{
"type":"flannel",
"delegate":{
"hairpinMode":true,
"forceAddress":true,
"isDefaultGateway":true
}
},
{
"type":"portmap",
"capabilities":{
"portMappings":true
}
},
{
"type":"bandwidth",
"capabilities":{
"bandwidth":true
}
}
]
}
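Since the traceback below points at the bandwidth plugin's CHECK handler, the plugin can presumably be exercised directly, bypassing CRI-O, via the CNI spec's environment-variable protocol. A minimal sketch; the netns and prevResult here are synthetic (CHECK requires a prevResult), no bandwidth limits are set to match the Bandwidth:<nil> seen in the CRI-O logs, and a synthetic prevResult may of course fail differently:
sudo ip netns add cnitest
echo '{"cniVersion":"1.0.0","name":"cbr0","type":"bandwidth","prevResult":{"cniVersion":"1.0.0","interfaces":[{"name":"eth0","sandbox":"/var/run/netns/cnitest"}],"ips":[]}}' \
  | sudo CNI_COMMAND=CHECK CNI_CONTAINERID=cnitest CNI_NETNS=/var/run/netns/cnitest CNI_IFNAME=eth0 CNI_PATH=/var/lib/rancher/k3s/data/cni /var/lib/rancher/k3s/data/cni/bandwidth
sudo ip netns del cnitest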
Checking sudo journalctl -l, the most interesting bit seems to be a segmentation fault during the CHECK that CRI-O runs after flannel has delegated the bridge setup:
crio[1485]: map[string]interface {}{"cniVersion":"1.0.0", "forceAddress":true, "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"10.42.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xa, 0x2a, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x0, 0x0}}, GW:net.IP(nil), MTU:0, AdvMSS:0, Priority:0, Table:(*int)(nil), Scope:(*int)(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc000010c10), "name":"cbr0", "type":"bridge"}
crio[1485]: delegateAdd: netconf sent to delegate plugin:
crio[1485]: {"cniVersion":"1.0.0","forceAddress":true,"hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"10.42.0.0/24"}]],"routes":[{"dst":"10.42.0.0/16"}]," type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-12-13T17:52:01.272640014Z" level=info msg="Got pod network &{Name:helm-install-traefik-crd-g7479 Namespace:kube-system ID:7cd440bbe2b44ca1f08683bb76cfedddfb4efe1dbca1971e0460bf4c0f9a09c3 UID:5f67db33-5ccd-48c3-b3a2-6ae1a638e2fd NetNS:/var/run/netns/0c39c28d-b2b7-47aa-b58b-d1e0109365d6 Networks:[{Name:cbr0 Ifname:eth0}] RuntimeConfig:map[cbr0:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath:kubepods-besteffort-pod5f67db33_5ccd_48c3_b3a2_6ae1a638e2fd.slice PodAnnotations:0xc00012bd18}] Aliases:map[]}"
crio[1485]: time="2025-12-13T17:52:01.272845473Z" level=info msg="Checking pod kube-system_helm-install-traefik-crd-g7479 for CNI network cbr0 (type=flannel)"
crio[1485]: time="2025-12-13T17:52:01.344908812Z" level=info msg="Got pod network &{Name:metrics-server-7b9c9c4b9c-8p7sw Namespace:kube-system ID:9faca69ae03bef60900946ed91246fb154543b36295df1508e170ff42773ce45 UID:3a33a87e-5acb-4333-a8ec-22fd42b3f7e5 NetNS:/var/run/netns/e2864051-cd11-413e-8b33-8227aeb7276a Networks:[{Name:cbr0 Ifname:eth0}] RuntimeConfig:map[cbr0:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath:kubepods-burstable-pod3a33a87e_5acb_4333_a8ec_22fd42b3f7e5.slice PodAnnotations:0xc00007ce68}] Aliases:map[]}"
crio[1485]: time="2025-12-13T17:52:01.345600442Z" level=info msg="Adding pod kube-system_metrics-server-7b9c9c4b9c-8p7sw to CNI network \"cbr0\" (type=flannel)"
systemd-networkd[491]: vethfb35c369: Link UP
kernel: cni0: port 3(vethfb35c369) entered blocking state
kernel: cni0: port 3(vethfb35c369) entered disabled state
kernel: vethfb35c369: entered allmulticast mode
kernel: vethfb35c369: entered promiscuous mode
kernel: cni0: port 3(vethfb35c369) entered blocking state
kernel: cni0: port 3(vethfb35c369) entered forwarding state
systemd-networkd[491]: vethfb35c369: Gained carrier
crio[1485]: map[string]interface {}{"cniVersion":"1.0.0", "forceAddress":true, "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"10.42.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xa, 0x2a, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x0, 0x0}}, GW:net.IP(nil), MTU:0, AdvMSS:0, Priority:0, Table:(*int)(nil), Scope:(*int)(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc000082bc0), "name":"cbr0", "type":"bridge"}
crio[1485]: delegateAdd: netconf sent to delegate plugin:
crio[1485]: {"cniVersion":"1.0.0","forceAddress":true,"hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"10.42.0.0/24"}]],"routes":[{"dst":"10.42.0.0/16"}]," type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-12-13T17:52:01.434036291Z" level=info msg="Got pod network &{Name:metrics-server-7b9c9c4b9c-8p7sw Namespace:kube-system ID:9faca69ae03bef60900946ed91246fb154543b36295df1508e170ff42773ce45 UID:3a33a87e-5acb-4333-a8ec-22fd42b3f7e5 NetNS:/var/run/netns/e2864051-cd11-413e-8b33-8227aeb7276a Networks:[{Name:cbr0 Ifname:eth0}] RuntimeConfig:map[cbr0:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath:kubepods-burstable-pod3a33a87e_5acb_4333_a8ec_22fd42b3f7e5.slice PodAnnotations:0xc00007ce68}] Aliases:map[]}"
crio[1485]: time="2025-12-13T17:52:01.434201093Z" level=info msg="Checking pod kube-system_metrics-server-7b9c9c4b9c-8p7sw for CNI network cbr0 (type=flannel)"
crio[1485]: time="2025-12-13T17:52:01.467086704Z" level=info msg="Checking CNI network cbr0 (config version=1.0.0)"
crio[1485]: time="2025-12-13T17:52:01.467715599Z" level=error msg="Error checking network: netplugin failed: \"panic: runtime error: invalid memory address or nil pointer dereference\\n[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x5f89cc]\\n\\ngoroutine 1 gp=0xc000002380 m=0 mp=0x8da300 [running, locked to thread]:\\npanic({0x6763e0?, 0x8cd930?})\\n\\t/usr/local/go/src/runtime/panic.go:811 +0x168 fp=0xc0000d3950 sp=0xc0000d38a0 pc=0x46e668\\nruntime.panicmem(...)\\n\\t/usr/local/go/src/runtime/panic.go:262\\nruntime.sigpanic()\\n\\t/usr/local/go/src/runtime/signal_unix.go:925 +0x359 fp=0xc0000d39b0 sp=0xc0000d3950 pc=0x4705d9\\ngithub.com/containernetworking/plugins/plugins/meta/bandwidth.cmdCheck(0xc000096980)\\n\\t/tmp/tmp.SfBAFPHij0/src/github.com/containernetworking/plugins/plugins/meta/bandwidth/main.go:302 +0x2ac fp=0xc0000d3b18 sp=0xc0000d39b0 pc=0x5f89cc\\ngithub.com/containernetworking/plugins/vendor/github.com/containernetworking/cni/pkg/skel.(*dispatcher).checkVersionAndCall(0xc0000a9db0, 0xc000096980, {0x72b288, 0xc000079d70}, 0x6de140)\\n\\t/tmp/tmp.SfBAFPHij0/src/github.com/containernetworking/plugins/vendor/github.com/containernetworking/cni/pkg/skel/skel.go:204 +0x116 fp=0xc0000d3bc0 sp=0xc0000d3b18 pc=0x576856\\ngithub.com/containernetworking/plugins/vendor/github.com/containernetworking/cni/pkg/skel.(*dispatcher).pluginMain(0xc0000a9db0, {0x6de138, 0x6de148, 0x6de140, 0x0, 0x0}, {0x72b288, 0xc000079d70}, {0xc0000924e0, 0x20})\\n\\t/tmp/tmp.SfBAFPHij0/src/github.com/containernetworking/plugins/vendor/github.com/containernetworking/cni/pkg/skel/skel.go:273 +0xbc5 fp=0xc0000d3d18 sp=0xc0000d3bc0 pc=0x5777a5\\ngithub.com/containernetworking/plugins/vendor/github.com/containernetworking/cni/pkg/skel.PluginMainFuncsWithError(...)\\n\\t/tmp/tmp.SfBAFPHij0/src/github.com/containernetworking/plugins/vendor/github.com/containernetworking/cni/pkg/skel/skel.go:394\\ngithub.com/containernetworking/plugins/vendor/github.com/containernetworking/cni/pkg/skel.PluginMainFuncs({0x6de138, 0x6de148, 0x6de140, 0x0, 0x0}, {0x72b288?, 0xc000079d70?}, {0xc0000924e0?, 0x7ffdf5072c9a?})\\n\\t/tmp/tmp.SfBAFPHij0/src/github.com/containernetworking/plugins/vendor/github.com/containernetworking/cni/pkg/skel/skel.go:411 +0x13f fp=0xc0000d3e20 sp=0xc0000d3d18 pc=0x577cbf\\ngithub.com/containernetworking/plugins/plugins/meta/bandwidth.Main()\\n\\t/tmp/tmp.SfBAFPHij0/src/github.com/containernetworking/plugins/plugins/meta/bandwidth/main.go:244 +0x11f fp=0xc0000d3ed0 sp=0xc0000d3e20 pc=0x5f85bf\\ngithub.com/containernetworking/plugins/vendor/github.com/docker/docker/pkg/reexec.Init(...)\\n\\t/tmp/tmp.SfBAFPHij0/src/github.com/containernetworking/plugins/vendor/github.com/docker/docker/pkg/reexec/reexec.go:33\\nmain.main()\\n\\t/tmp/tmp.SfBAFPHij0/src/github.com/containernetworking/plugins/main_linux.go:34 +0x15c fp=0xc0000d3f50 sp=0xc0000d3ed0 pc=0x62b61c\\nruntime.main()\\n\\t/usr/local/go/src/runtime/proc.go:283 +0x28b fp=0xc0000d3fe0 sp=0xc0000d3f50 pc=0x43c78b\\nruntime.goexit({})\\n\\t/usr/local/go/src/runtime/asm_amd64.s:1700 +0x1 fp=0xc0000d3fe8 sp=0xc0000d3fe0 pc=0x475d01\\n\\ngoroutine 2 gp=0xc0000028c0 m=nil [force gc (idle)]:\\nruntime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)\\n\\t/usr/local/go/src/runtime/proc.go:435 +0xce fp=0xc00003cfa8 sp=0xc00003cf88 pc=0x46eb4e\\nruntime.goparkunlock(...)\\n\\t/usr/local/go/src/runtime/proc.go:441\\nruntime.forcegchelper()\\n\\t/usr/local/go/src/runtime/proc.go:348 +0xb3 fp=0xc00003cfe0 sp=0xc00003cfa8 
pc=0x43cad3\\nruntime.goexit({})\\n\\t/usr/local/go/src/runtime/asm_amd64.s:1700 +0x1 fp=0xc00003cfe8 sp=0xc00003cfe0 pc=0x475d01\\ncreated by runtime.init.7 in goroutine 1\\n\\t/usr/local/go/src/runtime/proc.go:336 +0x1a\\n\\ngoroutine 3 gp=0xc000002e00 m=nil [GC sweep wait]:\\nruntime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)\\n\\t/usr/local/go/src/runtime/proc.go:435 +0xce fp=0xc00003d780 sp=0xc00003d760 pc=0x46eb4e\\nruntime.goparkunlock(...)\\n\\t/usr/local/go/src/runtime/proc.go:441\\nruntime.bgsweep(0xc00005c000)\\n\\t/usr/local/go/src/runtime/mgcsweep.go:276 +0x94 fp=0xc00003d7c8 sp=0xc00003d780 pc=0x427634\\nruntime.gcenable.gowrap1()\\n\\t/usr/local/go/src/runtime/mgc.go:204 +0x25 fp=0xc00003d7e0 sp=0xc00003d7c8 pc=0x41bb05\\nruntime.goexit({})\\n\\t/usr/local/go/src/runtime/asm_amd64.s:1700 +0x1 fp=0xc00003d7e8 sp=0xc00003d7e0 pc=0x475d01\\ncreated by runtime.gcenable in goroutine 1\\n\\t/usr/local/go/src/runtime/mgc.go:204 +0x66\\n\\ngoroutine 4 gp=0xc000002fc0 m=nil [GC scavenge wait]:\\nruntime.gopark(0xc00005c000?, 0x727b98?, 0x1?, 0x0?, 0xc000002fc0?)\\n\\t/usr/local/go/src/runtime/proc.go:435 +0xce fp=0xc00003df78 sp=0xc00003df58 pc=0x46eb4e\\nruntime.goparkunlock(...)\\n\\t/usr/local/go/src/runtime/proc.go:441\\nruntime.(*scavengerState).park(0x8d92e0)\\n\\t/usr/local/go/src/runtime/mgcscavenge.go:425 +0x49 fp=0xc00003dfa8 sp=0xc00003df78 pc=0x4250e9\\nruntime.bgscavenge(0xc00005c000)\\n\\t/usr/local/go/src/runtime/mgcscavenge.go:653 +0x3c fp=0xc00003dfc8 sp=0xc00003dfa8 pc=0x42565c\\nruntime.gcenable.gowrap2()\\n\\t/usr/local/go/src/runtime/mgc.go:205 +0x25 fp=0xc00003dfe0 sp=0xc00003dfc8 pc=0x41baa5\\nruntime.goexit({})\\n\\t/usr/local/go/src/runtime/asm_amd64.s:1700 +0x1 fp=0xc00003dfe8 sp=0xc00003dfe0 pc=0x475d01\\ncreated by runtime.gcenable in goroutine 1\\n\\t/usr/local/go/src/runtime/mgc.go:205 +0xa5\\n\\ngoroutine 5 gp=0xc000003500 m=nil [finalizer wait]:\\nruntime.gopark(0x8f9e40?, 0x490013?, 0x78?, 0xc6?, 0x413dde?)\\n\\t/usr/local/go/src/runtime/proc.go:435 +0xce fp=0xc00003c630 sp=0xc00003c610 pc=0x46eb4e\\nruntime.runfinq()\\n\\t/usr/local/go/src/runtime/mfinal.go:196 +0x107 fp=0xc00003c7e0 sp=0xc00003c630 pc=0x41aac7\\nruntime.goexit({})\\n\\t/usr/local/go/src/runtime/asm_amd64.s:1700 +0x1 fp=0xc00003c7e8 sp=0xc00003c7e0 pc=0x475d01\\ncreated by runtime.createfing in goroutine 1\\n\\t/usr/local/go/src/runtime/mfinal.go:166 +0x3d\\n\\ngoroutine 6 gp=0xc0000036c0 m=nil [chan receive]:\\nruntime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)\\n\\t/usr/local/go/src/runtime/proc.go:435 +0xce fp=0xc00003e718 sp=0xc00003e6f8 pc=0x46eb4e\\nruntime.chanrecv(0xc00006c0e0, 0x0, 0x1)\\n\\t/usr/local/go/src/runtime/chan.go:664 +0x445 fp=0xc00003e790 sp=0xc00003e718 pc=0x40d445\\nruntime.chanrecv1(0x0?, 0x0?)\\n\\t/usr/local/go/src/runtime/chan.go:506 +0x12 fp=0xc00003e7b8 sp=0xc00003e790 pc=0x40cff2\\nruntime.unique_runtime_registerUniqueMapCleanup.func2(...)\\n\\t/usr/local/go/src/runtime/mgc.go:1797\\nruntime.unique_runtime_registerUniqueMapCleanup.gowrap1()\\n\\t/usr/local/go/src/runtime/mgc.go:1800 +0x2f fp=0xc00003e7e0 sp=0xc00003e7b8 pc=0x41ec4f\\nruntime.goexit({})\\n\\t/usr/local/go/src/runtime/asm_amd64.s:1700 +0x1 fp=0xc00003e7e8 sp=0xc00003e7e0 pc=0x475d01\\ncreated by unique.runtime_registerUniqueMapCleanup in goroutine 1\\n\\t/usr/local/go/src/runtime/mgc.go:1795 +0x79\\n\""
crio[1485]: time="2025-12-13T17:52:01.468783962Z" level=info msg="NetworkStart: stopping network for sandbox e2c13c06c631a289f9198d75a716ea4bb104f6c4bb4e0e812d7e69c5bcb27068" id=ae38ebc2-e9bd-4a85-873a-e82fe0295fe7 name=/runtime.v1.RuntimeService/RunPodSandbox
crio[1485]: time="2025-12-13T17:52:01.46900883Z" level=info msg="Got pod network &{Name:coredns-7f496c8d7d-vt5xw Namespace:kube-system ID:e2c13c06c631a289f9198d75a716ea4bb104f6c4bb4e0e812d7e69c5bcb27068 UID:aef26216-fa8f-472a-8c77-0b59e017db69 NetNS:/var/run/netns/50be8d3a-d8a4-462a-98f7-3b1575c803fa Networks:[{Name:cbr0 Ifname:eth0}] RuntimeConfig:map[cbr0:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath:kubepods-burstable-podaef26216_fa8f_472a_8c77_0b59e017db69.slice PodAnnotations:0xc000b90318}] Aliases:map[]}"
crio[1485]: time="2025-12-13T17:52:01.46912056Z" level=info msg="Deleting pod kube-system_coredns-7f496c8d7d-vt5xw from CNI network \"cbr0\" (type=flannel)"
The segfault is reported by the crio process, but judging from the stack trace (github.com/containernetworking/plugins/plugins/meta/bandwidth.cmdCheck), it comes from the bandwidth CNI plugin shipped by K3s.
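For completeness, the version of the bundled plugin can be dumped directly; per the CNI spec, plugins answer CNI_COMMAND=VERSION on stdin/stdout:
echo '{"cniVersion":"1.0.0"}' | sudo CNI_COMMAND=VERSION /var/lib/rancher/k3s/data/cni/bandwidth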
The network configuration ends up being
$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host noprefixroute
valid_lft forever preferred_lft forever
2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 52:54:00:27:f8:17 brd ff:ff:ff:ff:ff:ff
inet 192.168.122.229/24 metric 1024 brd 192.168.122.255 scope global dynamic enp1s0
valid_lft 2391sec preferred_lft 2391sec
inet6 fe80::5054:ff:fe27:f817/64 scope link
valid_lft forever preferred_lft forever
3: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default
link/ether 02:94:ce:f7:83:ed brd ff:ff:ff:ff:ff:ff
inet 10.42.0.0/32 scope global flannel.1
valid_lft forever preferred_lft forever
inet6 fe80::94:ceff:fef7:83ed/64 scope link
valid_lft forever preferred_lft forever
4: cni0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
link/ether 82:d9:69:01:cc:f1 brd ff:ff:ff:ff:ff:ff
inet 10.42.0.1/24 brd 10.42.0.255 scope global cni0
valid_lft forever preferred_lft forever
inet6 fe80::80d9:69ff:fe01:ccf1/64 scope link
valid_lft forever preferred_lft forever