Flannel pod keeps restarting on new Kubernetes node, other pods like kube-proxy also crash repeatedly despite identical configuration #2292

@ptite-ratte

Description
Hello,

I have a Kubernetes cluster to which I recently added a new node (srvvkubernetes06). Flannel is used as the CNI plugin on all nodes. On this new node, however, the Flannel pod is stuck in CrashLoopBackOff. Other pods on the same node, such as kube-proxy, show similar behavior: they start, then stop and restart repeatedly.

What I’ve checked so far:

- The CNI config file (/etc/cni/net.d/10-flannel.conflist) is identical on the new node and on a working node (srvvkubernetes05).
- The vxlan kernel module is loaded on both nodes.
- The flannel.1 network interface exists and is UP on both nodes, with node-specific IP addresses.
- The Flannel pod's init containers complete successfully, but the main container keeps restarting.
- The pod is deployed via a DaemonSet with the same image version on all nodes.
- Other system pods on the new node, such as kube-proxy, also keep restarting, which suggests a broader node-level issue rather than a Flannel-specific one.
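Since the checks above all look clean and multiple unrelated pods are crashing, the problem is likely on the node itself (kubelet, container runtime, kernel, or sysctls) rather than in Flannel's config. A rough sketch of node-level checks I would run on srvvkubernetes06, assuming a systemd-based node with containerd and crictl installed (adjust unit and tool names to your setup):

```shell
# Kubelet logs often show why containers are being killed or failing probes
journalctl -u kubelet --since "1 hour ago" --no-pager | tail -n 100

# Container runtime view: list all containers, including exited ones,
# to see whether the runtime itself is restarting workloads
crictl ps -a

# Kernel messages: look for OOM kills or network-related errors
dmesg | grep -iE 'oom|killed process|vxlan' | tail -n 20

# Kubernetes network plugins expect these sysctls; a fresh node
# sometimes misses the br_netfilter module or the sysctl settings
lsmod | grep br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward
```

If `br_netfilter` is missing or `net.ipv4.ip_forward` is 0 on the new node but not on srvvkubernetes05, that alone can explain pods crash-looping only there.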

I suspect a network or node configuration problem specific to the new node but haven’t identified the root cause yet.

If anyone has faced similar issues or can suggest debugging steps, I would greatly appreciate it!

I can share full pod logs, network interface info, and Flannel config if needed.
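For anyone who wants to help, these are the commands I would use to gather that information, assuming Flannel runs in the kube-flannel namespace (it may be kube-system in older manifests) and substituting the actual pod name for the placeholder:

```shell
# Find the Flannel and kube-proxy pods scheduled on the new node
kubectl get pods -A -o wide --field-selector spec.nodeName=srvvkubernetes06

# Logs from the last crashed instance of the main container
# (replace FLANNEL_POD with the real pod name)
kubectl logs -n kube-flannel FLANNEL_POD --previous

# Events and restart reasons for the pod
kubectl describe pod -n kube-flannel FLANNEL_POD

# Exact exit code of the last termination, which distinguishes
# an application error from an OOM kill (137) or SIGTERM (143)
kubectl get pod -n kube-flannel FLANNEL_POD \
  -o jsonpath='{.status.containerStatuses[0].lastState.terminated.exitCode}'
```

The exit code in particular narrows things down quickly: 137 points at memory pressure or the kubelet killing the container, while a small nonzero code usually means the process itself failed and its `--previous` logs will say why.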

Thanks in advance!
