Kube-OVN Version
v1.12.19
Kubernetes Version
v1.28.11
Operation-system/Kernel Version
Ubuntu 22.04.4 LTS 5.15.0-117-generic
Description
I created a workload on my cluster, but the pod gets stuck creating its container. The pod events show that this is because no IP was allocated to the pod, even though my subnet still has many IPs available to allocate. Details follow:
Events:
Type Reason Age From Message
Warning AcquireAddressFailed 7m28s kube-ovn-controller AddressOutOfRange
Warning FailedCreatePodSandBox 9s kubelet (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "8d8xxx": plugin type="multus" name="multus-cni-network" failed (add): [default/my-pod-xxx:generic-veth]: error adding container to network "generic-veth": plugin type="kube-ovn" failed (add): RPC failed; request ip return 500 no address allocated to pod default/my-pod-xxx provider ovn, please see kube-ovn-controller logs to find errors
In my kube-ovn-cni logs I only see this:
wait address for pod default/my-pod-xxx provider ovn
no address allocated to pod default/my-pod-xxx provider ovn, please see kube-ovn-controller logs to find errors
del port request: {kube-ovn my-pod-xxx default xxx /var/run/netns/cni-xxx eth0 ovn [] {[] [] []} }
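For context on the AcquireAddressFailed / AddressOutOfRange event above: conceptually, the controller refuses an address that does not fall inside the subnet's CIDR. A minimal illustration of that range check (not Kube-OVN's actual IPAM code, and the 10.16.0.0/24 CIDR is hypothetical):

```python
import ipaddress

# Hypothetical subnet CIDR for illustration only; the real check is
# performed by kube-ovn-controller's IPAM, not by this snippet.
cidr = ipaddress.ip_network("10.16.0.0/24")

def in_range(ip: str) -> bool:
    """Return True if the given IP falls inside the subnet CIDR."""
    return ipaddress.ip_address(ip) in cidr

print(in_range("10.16.0.5"))  # inside the /24
print(in_range("10.16.1.5"))  # outside the /24 -> would surface as AddressOutOfRange
```

An address can also be rejected for other reasons (e.g. it is in the subnet's exclude list or already assigned), so this only sketches the simplest out-of-range case.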
Steps To Reproduce
- Create a subnet with CIDR /24
- Create a pod
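For reference, a minimal Subnet manifest of the kind step 1 describes (the name, CIDR, and namespace here are illustrative, not taken from the reporter's cluster):

```yaml
apiVersion: kubeovn.io/v1
kind: Subnet
metadata:
  name: example-subnet     # hypothetical name
spec:
  protocol: IPv4
  cidrBlock: 10.16.0.0/24  # hypothetical /24
  gateway: 10.16.0.1
  namespaces:
    - default
```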
Current Behavior
Pod can't get an IP and is stuck creating the container
Expected Behavior
Pod gets an IP and status is Running
Activity
dosubot commented on Apr 6, 2025
Hello @ade-iqbal! I'm a bot ready to help you resolve bugs, answer questions, and become a contributor. I'm here to help while you wait for a human maintainer. Let's tackle this together! 🚀
I found a similar closed issue titled "request ip return 500 no address allocated to pod $POD_NAME provider ovn, please see kube-ovn-controller logs to find errors" which was resolved by a pull request that replaced StrategicMergePatchType with MergePatchType [1]. Additionally, there is another closed issue titled "[BUG] wrong ippool allocation" (#4687) which was resolved by supporting multiple IPPools in the namespace [2].
You might want to check if the solution from these issues applies to your case, such as verifying the patch type used or considering the configuration of IPPools in your namespace.
zhangzujian commented on Apr 7, 2025
Please provide logs of the kube-ovn-controller pods.

ade-iqbal commented on Apr 7, 2025
There aren't any kube-ovn-controller logs about this issue; I only have logs for other pods.

zhangzujian commented on Apr 8, 2025
kube-ovn-controller is responsible for subnet management and pod IPAM. We need its logs to see what happened. You can also check the subnet's status to see whether it is ready for use.

ade-iqbal commented on Apr 8, 2025
My subnet is ready for use. I tried to get logs from kube-ovn-controller, but found nothing.

zhangzujian commented on Apr 9, 2025
Probably there are several kube-ovn-controller pods and the one selected by app=kube-ovn-controller is not the current leader. Please attach ALL the logs of the leader pod.

ade-iqbal commented on Apr 9, 2025
I have 3 kube-ovn-controller pods and checked them one by one, but no relevant logs were found.

ade-iqbal commented on Apr 18, 2025
About this issue: the pod eventually gets an IP address after several minutes to hours. I'm now hitting the same problem again, and there are still no kube-ovn-controller logs about the pod.

oilbeater commented on Apr 18, 2025
Can you run kubectl get lease -n kube-system kube-ovn-controller to see which pod holds the leader lease? I'm afraid something weird happened and some legacy pod is holding the lease.

ade-iqbal commented on Apr 18, 2025
The leader exists, but there are no logs about the pod in the leader controller.
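For anyone hitting the same symptom, the leader-and-logs check discussed above can be sketched as the following commands (they require a live cluster; the assumption here is that the Lease's standard holderIdentity field names the leader pod, and <leader-pod> is a placeholder, not a real pod name):

```shell
# Find which pod currently holds the kube-ovn-controller leader lease.
kubectl get lease -n kube-system kube-ovn-controller \
  -o jsonpath='{.spec.holderIdentity}'

# Fetch ALL the logs from that leader pod (substitute the name printed above).
kubectl logs -n kube-system <leader-pod> --tail=-1

# Also confirm the subnet reports Ready in its status.
kubectl get subnet -o wide
```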