Description
What did you do
- How was the cluster created?
  - `k3d cluster create dev --agents 3 --registry-config .\registries.yaml --api-port 0.0.0.0:6550 --k3s-arg "--kube-proxy-arg=conntrack-max-per-core=0@server:" --k3s-arg "--kube-proxy-arg=conntrack-max-per-core=0@agent:"` (a config-file equivalent is sketched just after this list)
- What did you do afterwards?
  - `kubectl cluster-info`
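For completeness, here is roughly the same setup expressed as a k3d config file. This is only a sketch from memory against the `k3d.io/v1alpha5` Simple schema, so field names should be double-checked; note that `registries.config` embeds the registries.yaml content inline rather than pointing at the file:

```yaml
# dev.yaml -- hypothetical config-file equivalent of the CLI flags above
apiVersion: k3d.io/v1alpha5
kind: Simple
metadata:
  name: dev
agents: 3
kubeAPI:
  hostIP: "0.0.0.0"   # --api-port 0.0.0.0:6550
  hostPort: "6550"
registries:
  config: |           # inline equivalent of --registry-config .\registries.yaml (truncated here)
    mirrors:
      docker.io:
        endpoint:
          - https://dockerhub.XXXXX/v2
options:
  k3s:
    extraArgs:
      - arg: --kube-proxy-arg=conntrack-max-per-core=0
        nodeFilters:
          - server:*
          - agent:*
```

The cluster would then be created with `k3d cluster create --config dev.yaml`.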
What did you expect to happen
`kubectl cluster-info` works and prints the control plane endpoint.
Screenshots or terminal output
(base) PS D:\k3d> kubectl cluster-info
E1114 22:40:57.603167 14652 memcache.go:265] couldn't get current server API group list: Get "https://host.docker.internal:6550/api?timeout=32s": EOF
E1114 22:41:20.305007 14652 memcache.go:265] couldn't get current server API group list: Get "https://host.docker.internal:6550/api?timeout=32s": EOF
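A few commands that may help narrow down where the EOF happens; this is an untested sketch, and `k3d-dev-serverlb` assumes k3d's default `k3d-<cluster>-serverlb` name for the load balancer container that forwards the API port:

```powershell
docker ps --filter "name=k3d-dev"    # are all node containers plus the serverlb running?
docker logs k3d-dev-serverlb         # the nginx LB that proxies host port 6550 to the server nodes
curl.exe -vk https://host.docker.internal:6550/version   # curl.exe, since plain curl is a PowerShell alias
```

The `/version` endpoint is served unauthenticated by the apiserver, so this should show whether the TLS handshake completes at all or the connection is cut at the load balancer.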
And these are the logs of k3d-dev-server-0:
k3d-dev-server-0.log
And these are the logs of k3d-dev-agent-0:
2025-11-14 22:36:44 time="2025-11-14T14:36:44Z" level=info msg="Starting k3s agent v1.31.5+k3s1 (56ec5dd4)"
2025-11-14 22:36:44 time="2025-11-14T14:36:44Z" level=info msg="Updated load balancer k3s-agent-load-balancer default server: k3d-dev-server-0:6443"
2025-11-14 22:36:44 time="2025-11-14T14:36:44Z" level=info msg="Running load balancer k3s-agent-load-balancer 127.0.0.1:6444 -> [] [default: k3d-dev-server-0:6443]"
2025-11-14 22:36:44 E1114 14:36:44.910070 88 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"http://localhost:8080/api?timeout=32s\": dial tcp [::1]:8080: connect: connection refused"
2025-11-14 22:36:44 E1114 14:36:44.911973 88 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"http://localhost:8080/api?timeout=32s\": dial tcp [::1]:8080: connect: connection refused"
2025-11-14 22:36:44 E1114 14:36:44.913720 88 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"http://localhost:8080/api?timeout=32s\": dial tcp [::1]:8080: connect: connection refused"
2025-11-14 22:36:44 E1114 14:36:44.915588 88 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"http://localhost:8080/api?timeout=32s\": dial tcp [::1]:8080: connect: connection refused"
2025-11-14 22:36:44 The connection to the server localhost:8080 was refused - did you specify the right host or port?
2025-11-14 22:36:44 time="2025-11-14T14:36:44Z" level=warning msg="Cluster CA certificate is not trusted by the host CA bundle, but the token does not include a CA hash. Use the full token from the server's node-token file to enable Cluster CA validation."
2025-11-14 22:36:45 time="2025-11-14T14:36:45Z" level=info msg="Using private registry config file at /etc/rancher/k3s/registries.yaml"
2025-11-14 22:36:45 time="2025-11-14T14:36:45Z" level=info msg="Module overlay was already loaded"
2025-11-14 22:36:45 time="2025-11-14T14:36:45Z" level=info msg="Module nf_conntrack was already loaded"
2025-11-14 22:36:45 time="2025-11-14T14:36:45Z" level=warning msg="Failed to load kernel module br_netfilter with modprobe"
2025-11-14 22:36:45 time="2025-11-14T14:36:45Z" level=warning msg="Failed to load kernel module iptable_nat with modprobe"
2025-11-14 22:36:45 time="2025-11-14T14:36:45Z" level=warning msg="Failed to load kernel module iptable_filter with modprobe"
2025-11-14 22:36:45 time="2025-11-14T14:36:45Z" level=warning msg="Failed to load kernel module nft-expr-counter with modprobe"
2025-11-14 22:36:45 time="2025-11-14T14:36:45Z" level=warning msg="Failed to load kernel module nfnetlink-subsys-11 with modprobe"
2025-11-14 22:36:45 time="2025-11-14T14:36:45Z" level=warning msg="Failed to load kernel module nft-chain-2-nat with modprobe"
2025-11-14 22:36:45 time="2025-11-14T14:36:45Z" level=info msg="Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400"
2025-11-14 22:36:45 time="2025-11-14T14:36:45Z" level=info msg="Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600"
2025-11-14 22:36:45 time="2025-11-14T14:36:45Z" level=info msg="Logging containerd to /var/lib/rancher/k3s/agent/containerd/containerd.log"
2025-11-14 22:36:45 time="2025-11-14T14:36:45Z" level=info msg="Running containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/k3s/agent/containerd"
2025-11-14 22:36:46 time="2025-11-14T14:36:46Z" level=info msg="containerd is now running"
2025-11-14 22:36:46 time="2025-11-14T14:36:46Z" level=info msg="Getting list of apiserver endpoints from server"
2025-11-14 22:36:46 time="2025-11-14T14:36:46Z" level=info msg="Got apiserver addresses from supervisor: [172.18.0.3:6443]"
2025-11-14 22:36:46 time="2025-11-14T14:36:46Z" level=info msg="Adding server to load balancer k3s-agent-load-balancer: 172.18.0.3:6443"
2025-11-14 22:36:46 time="2025-11-14T14:36:46Z" level=info msg="Updated load balancer k3s-agent-load-balancer server addresses -> [172.18.0.3:6443] [default: k3d-dev-server-0:6443]"
2025-11-14 22:36:46 time="2025-11-14T14:36:46Z" level=info msg="Connecting to proxy" url="wss://172.18.0.3:6443/v1-k3s/connect"
2025-11-14 22:36:46 time="2025-11-14T14:36:46Z" level=info msg="Creating k3s-cert-monitor event broadcaster"
2025-11-14 22:36:46 time="2025-11-14T14:36:46Z" level=info msg="Running kubelet --address=0.0.0.0 --allowed-unsafe-sysctls=net.ipv4.ip_forward,net.ipv6.conf.all.forwarding --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=cgroupfs --client-ca-file=/var/lib/rancher/k3s/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --containerd=/run/k3s/containerd/containerd.sock --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=k3d-dev-agent-0 --kubeconfig=/var/lib/rancher/k3s/agent/kubelet.kubeconfig --kubelet-cgroups=/k3s --node-ip=172.18.0.6 --node-labels= --pod-manifest-path=/var/lib/rancher/k3s/agent/pod-manifests --read-only-port=0 --resolv-conf=/etc/resolv.conf --runtime-cgroups=/k3s --serialize-image-pulls=false --tls-cert-file=/var/lib/rancher/k3s/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/k3s/agent/serving-kubelet.key"
2025-11-14 22:36:46 time="2025-11-14T14:36:46Z" level=info msg="Remotedialer connected to proxy" url="wss://172.18.0.3:6443/v1-k3s/connect"
2025-11-14 22:36:46 time="2025-11-14T14:36:46Z" level=info msg="Server 172.18.0.3:6443@UNCHECKED->RECOVERING from successful dial"
2025-11-14 22:36:46 Flag --containerd has been deprecated, This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.
2025-11-14 22:36:46 I1114 14:36:46.900884 87 server.go:486] "Kubelet version" kubeletVersion="v1.31.5+k3s1"
2025-11-14 22:36:46 I1114 14:36:46.900916 87 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
2025-11-14 22:36:46 I1114 14:36:46.903416 87 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/rancher/k3s/agent/client-ca.crt"
2025-11-14 22:36:46 time="2025-11-14T14:36:46Z" level=info msg="Server 172.18.0.3:6443@RECOVERING->ACTIVE from successful health check"
2025-11-14 22:36:46 time="2025-11-14T14:36:46Z" level=info msg="Closing 2 connections to load balancer server k3d-dev-server-0:6443@STANDBY*"
2025-11-14 22:36:46 E1114 14:36:46.906545 87 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = method RuntimeConfig not implemented"
2025-11-14 22:36:46 I1114 14:36:46.906577 87 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
2025-11-14 22:36:46 E1114 14:36:46.921124 87 info.go:119] Failed to get system UUID: open /etc/machine-id: no such file or directory
2025-11-14 22:36:46 W1114 14:36:46.921898 87 info.go:53] Couldn't collect info from any of the files in "/etc/machine-id,/var/lib/dbus/machine-id"
2025-11-14 22:36:46 I1114 14:36:46.922287 87 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
2025-11-14 22:36:46 I1114 14:36:46.922336 87 server.go:812] "NoSwap is set due to memorySwapBehavior not specified" memorySwapBehavior="" FailSwapOn=false
2025-11-14 22:36:46 I1114 14:36:46.922682 87 swap_util.go:113] "Swap is on" /proc/swaps contents=<
2025-11-14 22:36:46 Filename Type Size Used Priority
2025-11-14 22:36:46 /dev/sdb partition 4194304 5060 -2
2025-11-14 22:36:46 >
2025-11-14 22:36:46 E1114 14:36:46.924265 87 mount_linux.go:282] Mount failed: exit status 255
2025-11-14 22:36:46 Mounting command: mount
2025-11-14 22:36:46 Mounting arguments: -t tmpfs -o noswap tmpfs /var/lib/kubelet/tmpfs-noswap-test-900094042
2025-11-14 22:36:46 Output: mount: mounting tmpfs on /var/lib/kubelet/tmpfs-noswap-test-900094042 failed: Invalid argument
2025-11-14 22:36:46
2025-11-14 22:36:46 I1114 14:36:46.924312 87 swap_util.go:87] "error mounting tmpfs with the noswap option. Assuming not supported" error=<
2025-11-14 22:36:46 mount failed: exit status 255
2025-11-14 22:36:46 Mounting command: mount
2025-11-14 22:36:46 Mounting arguments: -t tmpfs -o noswap tmpfs /var/lib/kubelet/tmpfs-noswap-test-900094042
2025-11-14 22:36:46 Output: mount: mounting tmpfs on /var/lib/kubelet/tmpfs-noswap-test-900094042 failed: Invalid argument
2025-11-14 22:36:46 >
2025-11-14 22:36:46 I1114 14:36:46.924603 87 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
2025-11-14 22:36:46 I1114 14:36:46.924663 87 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"k3d-dev-agent-0","RuntimeCgroupsName":"/k3s","SystemCgroupsName":"","KubeletCgroupsName":"/k3s","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
2025-11-14 22:36:46 I1114 14:36:46.924890 87 topology_manager.go:138] "Creating topology manager with none policy"
2025-11-14 22:36:46 I1114 14:36:46.924901 87 container_manager_linux.go:300] "Creating device plugin manager"
2025-11-14 22:36:46 I1114 14:36:46.925267 87 state_mem.go:36] "Initialized new in-memory state store"
2025-11-14 22:36:46 I1114 14:36:46.925442 87 kubelet.go:408] "Attempting to sync node with API server"
2025-11-14 22:36:46 I1114 14:36:46.925511 87 kubelet.go:303] "Adding static pod path" path="/var/lib/rancher/k3s/agent/pod-manifests"
2025-11-14 22:36:46 I1114 14:36:46.925533 87 kubelet.go:314] "Adding apiserver pod source"
2025-11-14 22:36:46 I1114 14:36:46.925551 87 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
2025-11-14 22:36:46 I1114 14:36:46.926338 87 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23-k3s2" apiVersion="v1"
2025-11-14 22:36:46 I1114 14:36:46.927006 87 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
2025-11-14 22:36:46 W1114 14:36:46.927086 87 probe.go:272] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
2025-11-14 22:36:46 I1114 14:36:46.928633 87 server.go:1269] "Started kubelet"
2025-11-14 22:36:46 I1114 14:36:46.928873 87 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
2025-11-14 22:36:46 I1114 14:36:46.928957 87 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
2025-11-14 22:36:46 I1114 14:36:46.930950 87 server.go:460] "Adding debug handlers to kubelet server"
2025-11-14 22:36:46 I1114 14:36:46.931144 87 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
2025-11-14 22:36:46 I1114 14:36:46.931944 87 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
2025-11-14 22:36:46 I1114 14:36:46.932094 87 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/rancher/k3s/agent/serving-kubelet.crt::/var/lib/rancher/k3s/agent/serving-kubelet.key"
2025-11-14 22:36:46 I1114 14:36:46.932142 87 volume_manager.go:289] "Starting Kubelet Volume Manager"
2025-11-14 22:36:46 I1114 14:36:46.932216 87 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
2025-11-14 22:36:46 I1114 14:36:46.932298 87 reconciler.go:26] "Reconciler: start to sync state"
2025-11-14 22:36:46 E1114 14:36:46.932689 87 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"k3d-dev-agent-0\" not found"
2025-11-14 22:36:46 I1114 14:36:46.936303 87 factory.go:221] Registration of the systemd container factory successfully
2025-11-14 22:36:46 I1114 14:36:46.936419 87 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
2025-11-14 22:36:46 E1114 14:36:46.937987 87 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
2025-11-14 22:36:46 I1114 14:36:46.938059 87 factory.go:221] Registration of the containerd container factory successfully
2025-11-14 22:36:46 I1114 14:36:46.943794 87 cpu_manager.go:214] "Starting CPU manager" policy="none"
2025-11-14 22:36:46 I1114 14:36:46.943817 87 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
2025-11-14 22:36:46 I1114 14:36:46.943840 87 state_mem.go:36] "Initialized new in-memory state store"
2025-11-14 22:36:46 E1114 14:36:46.943795 87 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"k3d-dev-agent-0\" not found" node="k3d-dev-agent-0"
2025-11-14 22:36:46 I1114 14:36:46.945780 87 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
2025-11-14 22:36:46 I1114 14:36:46.947524 87 policy_none.go:49] "None policy: Start"
2025-11-14 22:36:46 I1114 14:36:46.948702 87 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
2025-11-14 22:36:46 I1114 14:36:46.948754 87 status_manager.go:217] "Starting to sync pod status with apiserver"
2025-11-14 22:36:46 I1114 14:36:46.948778 87 kubelet.go:2321] "Starting kubelet main sync loop"
2025-11-14 22:36:46 E1114 14:36:46.948835 87 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
2025-11-14 22:36:46 I1114 14:36:46.949383 87 memory_manager.go:170] "Starting memorymanager" policy="None"
2025-11-14 22:36:46 I1114 14:36:46.949470 87 state_mem.go:35] "Initializing new in-memory state store"
2025-11-14 22:36:46 I1114 14:36:46.954354 87 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
2025-11-14 22:36:46 I1114 14:36:46.955613 87 eviction_manager.go:189] "Eviction manager: starting control loop"
2025-11-14 22:36:46 I1114 14:36:46.955658 87 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
2025-11-14 22:36:46 I1114 14:36:46.955870 87 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
2025-11-14 22:36:46 E1114 14:36:46.957083 87 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"k3d-dev-agent-0\" not found"
2025-11-14 22:36:47 I1114 14:36:47.058164 87 kubelet_node_status.go:72] "Attempting to register node" node="k3d-dev-agent-0"
2025-11-14 22:36:47 I1114 14:36:47.062815 87 kubelet_node_status.go:75] "Successfully registered node" node="k3d-dev-agent-0"
2025-11-14 22:36:47 E1114 14:36:47.062841 87 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"k3d-dev-agent-0\": node \"k3d-dev-agent-0\" not found"
2025-11-14 22:36:47 time="2025-11-14T14:36:47Z" level=info msg="Annotations and labels have been set successfully on node: k3d-dev-agent-0"
2025-11-14 22:36:47 time="2025-11-14T14:36:47Z" level=info msg="Starting flannel with backend vxlan"
2025-11-14 22:36:47 time="2025-11-14T14:36:47Z" level=info msg="Running kube-proxy --cluster-cidr=10.42.0.0/16 --conntrack-max-per-core=0 --conntrack-tcp-timeout-close-wait=0s --conntrack-tcp-timeout-established=0s --healthz-bind-address=127.0.0.1 --hostname-override=k3d-dev-agent-0 --kubeconfig=/var/lib/rancher/k3s/agent/kubeproxy.kubeconfig --proxy-mode=iptables"
2025-11-14 22:36:47 I1114 14:36:47.430087 87 server.go:677] "Successfully retrieved node IP(s)" IPs=["172.18.0.6"]
2025-11-14 22:36:47 E1114 14:36:47.430169 87 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
2025-11-14 22:36:47 I1114 14:36:47.432526 87 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
2025-11-14 22:36:47 I1114 14:36:47.432577 87 server_linux.go:169] "Using iptables Proxier"
2025-11-14 22:36:47 I1114 14:36:47.433622 87 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
2025-11-14 22:36:47 E1114 14:36:47.440302 87 proxier.go:283] "Failed to create nfacct runner, nfacct based metrics won't be available" err="nfacct sub-system not available"
2025-11-14 22:36:47 E1114 14:36:47.446892 87 proxier.go:283] "Failed to create nfacct runner, nfacct based metrics won't be available" err="nfacct sub-system not available"
2025-11-14 22:36:47 I1114 14:36:47.446999 87 server.go:483] "Version info" version="v1.31.5+k3s1"
2025-11-14 22:36:47 I1114 14:36:47.447026 87 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
2025-11-14 22:36:47 I1114 14:36:47.447726 87 config.go:199] "Starting service config controller"
2025-11-14 22:36:47 I1114 14:36:47.447767 87 shared_informer.go:313] Waiting for caches to sync for service config
2025-11-14 22:36:47 I1114 14:36:47.447733 87 config.go:105] "Starting endpoint slice config controller"
2025-11-14 22:36:47 I1114 14:36:47.447903 87 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
2025-11-14 22:36:47 I1114 14:36:47.447934 87 config.go:328] "Starting node config controller"
2025-11-14 22:36:47 I1114 14:36:47.447948 87 shared_informer.go:313] Waiting for caches to sync for node config
2025-11-14 22:36:47 I1114 14:36:47.516492 87 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
2025-11-14 22:36:47 I1114 14:36:47.548482 87 shared_informer.go:320] Caches are synced for service config
2025-11-14 22:36:47 I1114 14:36:47.548632 87 shared_informer.go:320] Caches are synced for node config
2025-11-14 22:36:47 I1114 14:36:47.548667 87 shared_informer.go:320] Caches are synced for endpoint slice config
2025-11-14 22:36:47 time="2025-11-14T14:36:47Z" level=info msg="Tunnel authorizer set Kubelet Port 0.0.0.0:10250"
2025-11-14 22:36:47 I1114 14:36:47.926836 87 apiserver.go:52] "Watching apiserver"
2025-11-14 22:36:47 I1114 14:36:47.932571 87 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
2025-11-14 22:36:48 E1114 14:36:48.046929 268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"http://localhost:8080/api?timeout=32s\": dial tcp [::1]:8080: connect: connection refused"
2025-11-14 22:36:48 E1114 14:36:48.049090 268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"http://localhost:8080/api?timeout=32s\": dial tcp [::1]:8080: connect: connection refused"
2025-11-14 22:36:48 E1114 14:36:48.051387 268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"http://localhost:8080/api?timeout=32s\": dial tcp [::1]:8080: connect: connection refused"
2025-11-14 22:36:48 E1114 14:36:48.053533 268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"http://localhost:8080/api?timeout=32s\": dial tcp [::1]:8080: connect: connection refused"
2025-11-14 22:36:48 The connection to the server localhost:8080 was refused - did you specify the right host or port?
2025-11-14 22:36:48 time="2025-11-14T14:36:48Z" level=info msg="Flannel found PodCIDR assigned for node k3d-dev-agent-0"
2025-11-14 22:36:48 time="2025-11-14T14:36:48Z" level=info msg="The interface eth0 with ipv4 address 172.18.0.6 will be used by flannel"
2025-11-14 22:36:48 I1114 14:36:48.914117 87 kube.go:139] Waiting 10m0s for node controller to sync
2025-11-14 22:36:48 I1114 14:36:48.914160 87 kube.go:469] Starting kube subnet manager
2025-11-14 22:36:48 time="2025-11-14T14:36:48Z" level=info msg="Starting network policy controller version v2.2.1, built on 2025-01-28T17:45:21Z, go1.22.10"
2025-11-14 22:36:48 I1114 14:36:48.973082 87 network_policy_controller.go:164] Starting network policy controller
2025-11-14 22:36:48 I1114 14:36:48.995336 87 network_policy_controller.go:176] Starting network policy controller full sync goroutine
2025-11-14 22:36:49 I1114 14:36:49.750903 87 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slmsk\" (UniqueName: \"kubernetes.io/projected/d8c7c1b4-1610-4f57-8f0f-4c85c0aa0ed6-kube-api-access-slmsk\") pod \"local-path-provisioner-5cf85fd84d-pwfxw\" (UID: \"d8c7c1b4-1610-4f57-8f0f-4c85c0aa0ed6\") " pod="kube-system/local-path-provisioner-5cf85fd84d-pwfxw"
2025-11-14 22:36:49 I1114 14:36:49.751032 87 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d8c7c1b4-1610-4f57-8f0f-4c85c0aa0ed6-config-volume\") pod \"local-path-provisioner-5cf85fd84d-pwfxw\" (UID: \"d8c7c1b4-1610-4f57-8f0f-4c85c0aa0ed6\") " pod="kube-system/local-path-provisioner-5cf85fd84d-pwfxw"
2025-11-14 22:36:49 I1114 14:36:49.895955 87 kube.go:490] Creating the node lease for IPv4. This is the n.Spec.PodCIDRs: [10.42.3.0/24]
2025-11-14 22:36:49 I1114 14:36:49.915253 87 kube.go:146] Node controller sync successful
2025-11-14 22:36:49 I1114 14:36:49.915340 87 vxlan.go:141] VXLAN config: VNI=1 Port=0 GBP=false Learning=false DirectRouting=false
2025-11-14 22:36:49 I1114 14:36:49.917716 87 kube.go:636] List of node(k3d-dev-agent-0) annotations: map[string]string{"alpha.kubernetes.io/provided-node-ip":"172.18.0.6", "k3s.io/hostname":"k3d-dev-agent-0", "k3s.io/internal-ip":"172.18.0.6", "k3s.io/node-args":"[\"agent\",\"--kube-proxy-arg\",\"conntrack-max-per-core=0\"]", "k3s.io/node-config-hash":"NA2KQKCJSNS4JPADRIKPLJTJFFA37QKUEB4CVGSM6BKGHBZFJYUA====", "k3s.io/node-env":"{\"K3S_KUBECONFIG_OUTPUT\":\"/output/kubeconfig.yaml\",\"K3S_TOKEN\":\"********\",\"K3S_URL\":\"https://k3d-dev-server-0:6443\"}", "node.alpha.kubernetes.io/ttl":"0", "volumes.kubernetes.io/controller-managed-attach-detach":"true"}
2025-11-14 22:36:49 I1114 14:36:49.987091 87 kube.go:490] Creating the node lease for IPv4. This is the n.Spec.PodCIDRs: [10.42.2.0/24]
2025-11-14 22:36:50 I1114 14:36:50.017462 87 kube.go:490] Creating the node lease for IPv4. This is the n.Spec.PodCIDRs: [10.42.1.0/24]
2025-11-14 22:36:50 I1114 14:36:50.231279 87 kube.go:490] Creating the node lease for IPv4. This is the n.Spec.PodCIDRs: [10.42.0.0/24]
2025-11-14 22:36:50 I1114 14:36:50.388512 87 iptables.go:51] Starting flannel in iptables mode...
2025-11-14 22:36:50 time="2025-11-14T14:36:50Z" level=warning msg="no subnet found for key: FLANNEL_NETWORK in file: /run/flannel/subnet.env"
2025-11-14 22:36:50 time="2025-11-14T14:36:50Z" level=warning msg="no subnet found for key: FLANNEL_SUBNET in file: /run/flannel/subnet.env"
2025-11-14 22:36:50 time="2025-11-14T14:36:50Z" level=warning msg="no subnet found for key: FLANNEL_IPV6_NETWORK in file: /run/flannel/subnet.env"
2025-11-14 22:36:50 time="2025-11-14T14:36:50Z" level=warning msg="no subnet found for key: FLANNEL_IPV6_SUBNET in file: /run/flannel/subnet.env"
2025-11-14 22:36:50 I1114 14:36:50.388595 87 iptables.go:115] Current network or subnet (10.42.0.0/16, 10.42.0.0/24) is not equal to previous one (0.0.0.0/0, 0.0.0.0/0), trying to recycle old iptables rules
2025-11-14 22:36:50 I1114 14:36:50.394725 87 iptables.go:125] Setting up masking rules
2025-11-14 22:36:50 I1114 14:36:50.396493 87 iptables.go:226] Changing default FORWARD chain policy to ACCEPT
2025-11-14 22:36:50 time="2025-11-14T14:36:50Z" level=info msg="Wrote flannel subnet file to /run/flannel/subnet.env"
2025-11-14 22:36:50 time="2025-11-14T14:36:50Z" level=info msg="Running flannel backend."
2025-11-14 22:36:50 I1114 14:36:50.397799 87 vxlan_network.go:65] watching for new subnet leases
2025-11-14 22:36:50 I1114 14:36:50.397837 87 subnet.go:152] Batch elem [0] is { lease.Event{Type:0, Lease:lease.Lease{EnableIPv4:true, EnableIPv6:false, Subnet:ip.IP4Net{IP:0xa2a0300, PrefixLen:0x18}, IPv6Subnet:ip.IP6Net{IP:(*ip.IP6)(nil), PrefixLen:0x0}, Attrs:lease.LeaseAttrs{PublicIP:0xac120003, PublicIPv6:(*ip.IP6)(nil), BackendType:"vxlan", BackendData:json.RawMessage{0x7b, 0x22, 0x56, 0x4e, 0x49, 0x22, 0x3a, 0x31, 0x2c, 0x22, 0x56, 0x74, 0x65, 0x70, 0x4d, 0x41, 0x43, 0x22, 0x3a, 0x22, 0x34, 0x32, 0x3a, 0x35, 0x35, 0x3a, 0x66, 0x32, 0x3a, 0x35, 0x33, 0x3a, 0x37, 0x30, 0x3a, 0x64, 0x39, 0x22, 0x7d}, BackendV6Data:json.RawMessage(nil)}, Expiration:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Asof:0}} }
2025-11-14 22:36:50 I1114 14:36:50.397925 87 subnet.go:152] Batch elem [0] is { lease.Event{Type:0, Lease:lease.Lease{EnableIPv4:true, EnableIPv6:false, Subnet:ip.IP4Net{IP:0xa2a0200, PrefixLen:0x18}, IPv6Subnet:ip.IP6Net{IP:(*ip.IP6)(nil), PrefixLen:0x0}, Attrs:lease.LeaseAttrs{PublicIP:0xac120005, PublicIPv6:(*ip.IP6)(nil), BackendType:"vxlan", BackendData:json.RawMessage{0x7b, 0x22, 0x56, 0x4e, 0x49, 0x22, 0x3a, 0x31, 0x2c, 0x22, 0x56, 0x74, 0x65, 0x70, 0x4d, 0x41, 0x43, 0x22, 0x3a, 0x22, 0x61, 0x61, 0x3a, 0x64, 0x37, 0x3a, 0x38, 0x35, 0x3a, 0x63, 0x39, 0x3a, 0x61, 0x62, 0x3a, 0x37, 0x62, 0x22, 0x7d}, BackendV6Data:json.RawMessage(nil)}, Expiration:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Asof:0}} }
2025-11-14 22:36:50 I1114 14:36:50.397973 87 vxlan_network.go:100] Received Subnet Event with VxLan: BackendType: vxlan, PublicIP: 172.18.0.3, PublicIPv6: (nil), BackendData: {"VNI":1,"VtepMAC":"42:55:f2:53:70:d9"}, BackendV6Data: (nil)
2025-11-14 22:36:50 I1114 14:36:50.403434 87 iptables.go:372] bootstrap done
2025-11-14 22:36:50 I1114 14:36:50.407801 87 iptables.go:372] bootstrap done
2025-11-14 22:36:50 I1114 14:36:50.448932 87 vxlan_network.go:100] Received Subnet Event with VxLan: BackendType: vxlan, PublicIP: 172.18.0.5, PublicIPv6: (nil), BackendData: {"VNI":1,"VtepMAC":"aa:d7:85:c9:ab:7b"}, BackendV6Data: (nil)
2025-11-14 22:36:50 I1114 14:36:50.449049 87 subnet.go:152] Batch elem [0] is { lease.Event{Type:0, Lease:lease.Lease{EnableIPv4:true, EnableIPv6:false, Subnet:ip.IP4Net{IP:0xa2a0100, PrefixLen:0x18}, IPv6Subnet:ip.IP6Net{IP:(*ip.IP6)(nil), PrefixLen:0x0}, Attrs:lease.LeaseAttrs{PublicIP:0xac120004, PublicIPv6:(*ip.IP6)(nil), BackendType:"vxlan", BackendData:json.RawMessage{0x7b, 0x22, 0x56, 0x4e, 0x49, 0x22, 0x3a, 0x31, 0x2c, 0x22, 0x56, 0x74, 0x65, 0x70, 0x4d, 0x41, 0x43, 0x22, 0x3a, 0x22, 0x38, 0x36, 0x3a, 0x63, 0x61, 0x3a, 0x35, 0x37, 0x3a, 0x32, 0x31, 0x3a, 0x32, 0x30, 0x3a, 0x38, 0x38, 0x22, 0x7d}, BackendV6Data:json.RawMessage(nil)}, Expiration:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Asof:0}} }
2025-11-14 22:36:50 I1114 14:36:50.509816 87 vxlan_network.go:100] Received Subnet Event with VxLan: BackendType: vxlan, PublicIP: 172.18.0.4, PublicIPv6: (nil), BackendData: {"VNI":1,"VtepMAC":"86:ca:57:21:20:88"}, BackendV6Data: (nil)
2025-11-14 22:36:51 E1114 14:36:51.174859 501 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"http://localhost:8080/api?timeout=32s\": dial tcp [::1]:8080: connect: connection refused"
2025-11-14 22:36:51 E1114 14:36:51.176796 501 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"http://localhost:8080/api?timeout=32s\": dial tcp [::1]:8080: connect: connection refused"
2025-11-14 22:36:51 E1114 14:36:51.178506 501 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"http://localhost:8080/api?timeout=32s\": dial tcp [::1]:8080: connect: connection refused"
2025-11-14 22:36:51 E1114 14:36:51.180270 501 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"http://localhost:8080/api?timeout=32s\": dial tcp [::1]:8080: connect: connection refused"
2025-11-14 22:36:51 The connection to the server localhost:8080 was refused - did you specify the right host or port?
Which OS & Architecture
- output of `k3d runtime-info`:
arch: x86_64
cgroupdriver: cgroupfs
cgroupversion: "2"
endpoint: /var/run/docker.sock
filesystem: UNKNOWN
infoname: docker-desktop
name: docker
os: Docker Desktop
ostype: linux
version: 27.2.0
Which version of k3d
- output of `k3d version`:
k3d version v5.8.3
k3s version v1.31.5-k3s1 (default)
Which version of docker
- output of `docker version` and `docker info`:
Client:
Version: 27.2.0
API version: 1.47
Go version: go1.21.13
Git commit: 3ab4256
Built: Tue Aug 27 14:17:17 2024
OS/Arch: windows/amd64
Context: desktop-linux
Server: Docker Desktop 4.34.2 (167172)
Engine:
Version: 27.2.0
API version: 1.47 (minimum version 1.24)
Go version: go1.21.13
Git commit: 3ab5c7d
Built: Tue Aug 27 14:15:15 2024
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: 1.7.20
GitCommit: 8fc6bcff51318944179630522a095cc9dbf9f353
runc:
Version: 1.1.13
GitCommit: v1.1.13-0-g58aa920
docker-init:
Version: 0.19.0
GitCommit: de40ad0
Client:
Version: 27.2.0
Context: desktop-linux
Debug Mode: false
Plugins:
buildx: Docker Buildx (Docker Inc.)
Version: v0.16.2-desktop.1
Path: C:\Program Files\Docker\cli-plugins\docker-buildx.exe
compose: Docker Compose (Docker Inc.)
Version: v2.29.2-desktop.2
Path: C:\Program Files\Docker\cli-plugins\docker-compose.exe
debug: Get a shell into any image or container (Docker Inc.)
Version: 0.0.34
Path: C:\Program Files\Docker\cli-plugins\docker-debug.exe
desktop: Docker Desktop commands (Alpha) (Docker Inc.)
Version: v0.0.15
Path: C:\Program Files\Docker\cli-plugins\docker-desktop.exe
dev: Docker Dev Environments (Docker Inc.)
Version: v0.1.2
Path: C:\Program Files\Docker\cli-plugins\docker-dev.exe
extension: Manages Docker extensions (Docker Inc.)
Version: v0.2.25
Path: C:\Program Files\Docker\cli-plugins\docker-extension.exe
feedback: Provide feedback, right in your terminal! (Docker Inc.)
Version: v1.0.5
Path: C:\Program Files\Docker\cli-plugins\docker-feedback.exe
init: Creates Docker-related starter files for your project (Docker Inc.)
Version: v1.3.0
Path: C:\Program Files\Docker\cli-plugins\docker-init.exe
sbom: View the packaged-based Software Bill Of Materials (SBOM) for an image (Anchore Inc.)
Version: 0.6.0
Path: C:\Program Files\Docker\cli-plugins\docker-sbom.exe
scout: Docker Scout (Docker Inc.)
Version: v1.13.0
Path: C:\Program Files\Docker\cli-plugins\docker-scout.exe
Server:
Containers: 6
Running: 6
Paused: 0
Stopped: 0
Images: 3
Server Version: 27.2.0
Storage Driver: overlayfs
driver-type: io.containerd.snapshotter.v1
Logging Driver: json-file
Cgroup Driver: cgroupfs
Cgroup Version: 2
Plugins:
Volume: local
Network: bridge host ipvlan macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local splunk syslog
Swarm: inactive
Runtimes: io.containerd.runc.v2 nvidia runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 8fc6bcff51318944179630522a095cc9dbf9f353
runc version: v1.1.13-0-g58aa920
init version: de40ad0
Security Options:
seccomp
Profile: unconfined
cgroupns
Kernel Version: 5.15.153.1-microsoft-standard-WSL2
Operating System: Docker Desktop
OSType: linux
Architecture: x86_64
CPUs: 16
Total Memory: 15.6GiB
Name: docker-desktop
ID: 7741db7d-3b89-4574-a67c-e0808e11ccc3
Docker Root Dir: /var/lib/docker
Debug Mode: false
HTTP Proxy: http.docker.internal:3128
HTTPS Proxy: http.docker.internal:3128
No Proxy: hubproxy.docker.internal
Labels:
com.docker.desktop.address=npipe://\\.\pipe\docker_cli
Experimental: false
Insecure Registries:
hubproxy.docker.internal:5555
127.0.0.0/8
Registry Mirrors:
https://dockerhub.proland.org.cn/
Live Restore Enabled: false
WARNING: daemon is not using the default seccomp profile
Here is the content of `registries.yaml`:
x-auth: &auth
  auth:
    username: XXXXX
    password: YYYYY
mirrors:
  docker.io:
    endpoint:
      - https://dockerhub.XXXXX/v2
  ghcr.io:
    endpoint:
      - https://ghcr.XXXXX/v2
  gcr.io:
    endpoint:
      - https://gcr.XXXXX/v2
  k8s.gcr.io:
    endpoint:
      - https://k8s-gcr.XXXXX/v2
  registry.k8s.io:
    endpoint:
      - https://k8s.XXXXX/v2
  quay.io:
    endpoint:
      - https://quay.XXXXX/v2
  mcr.microsoft.com:
    endpoint:
      - https://mcr.XXXXX/v2
  docker.elastic.co:
    endpoint:
      - https://elastic.XXXXX/v2
configs:
  docker.io:
    <<: *auth
  ghcr.io:
    <<: *auth
  gcr.io:
    <<: *auth
  k8s.gcr.io:
    <<: *auth
  registry.k8s.io:
    <<: *auth
  quay.io:
    <<: *auth
  mcr.microsoft.com:
    <<: *auth
  docker.elastic.co:
    <<: *auth
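And in case the mirrors matter here, a quick sanity check (sketch) that this config actually landed inside the nodes, using the `/etc/rancher/k3s/registries.yaml` path shown in the agent log above:

```powershell
docker exec k3d-dev-server-0 cat /etc/rancher/k3s/registries.yaml
docker exec k3d-dev-agent-0 cat /etc/rancher/k3s/registries.yaml
```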