Description
What you expected to happen?
I'm using Weave Net; kubectl reports:
Client Version: v1.27.4
Kustomize Version: v5.0.1
Server Version: v1.27.4
Containerd v1.7.3
Scenario: a cluster with 1 master and 2 worker nodes
Weave Net was installed by running the command below on the master, as described here:
kubectl apply -f https://github.com/weaveworks/weave/releases/download/v2.8.1/weave-daemonset-k8s.yaml
What happened?
Pods on the worker nodes fail to start. Below are the pod status and the logs I get from the failing weave-net pod.
masterNode:~# kubectl get pods -n kube-system -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
coredns-5d78c9869d-6xj4n 1/1 Running 3 (12h ago) 37d 10.244.1.3 masternode <none> <none>
coredns-5d78c9869d-95ks6 1/1 Running 3 (12h ago) 37d 10.244.1.4 masternode <none> <none>
etcd-masternode 1/1 Running 11 (12h ago) 37d 192.168.1.50 masternode <none> <none>
kube-apiserver-masternode 1/1 Running 12 (12h ago) 37d 192.168.1.50 masternode <none> <none>
kube-controller-manager-masternode 1/1 Running 27 (12h ago) 37d 192.168.1.50 masternode <none> <none>
kube-proxy-hcvwd 1/1 Running 285 (5m54s ago) 37h 192.168.1.111 nodeb <none> <none>
kube-proxy-p7nm2 1/1 Running 3 (12h ago) 37d 192.168.1.50 masternode <none> <none>
kube-proxy-t7mrh 0/1 CrashLoopBackOff 390 (54s ago) 37h 192.168.1.95 nodea <none> <none>
kube-scheduler-masternode 1/1 Running 28 (106m ago) 37d 192.168.1.50 masternode <none> <none>
weave-net-64rkh 2/2 Running 0 11h 192.168.1.50 masternode <none> <none>
weave-net-cggmf 0/2 CrashLoopBackOff 272 (115s ago) 11h 192.168.1.111 nodeb <none> <none>
weave-net-gps28 0/2 CrashLoopBackOff 274 (62s ago) 11h 192.168.1.95 nodea <none> <none>
Logs from the failing pod weave-net-gps28:
masterNode:~# kubectl logs weave-net-gps28 -n kube-system weave --follow
DEBU: 2023/09/26 05:20:35.618908 [kube-peers] Checking peer "c2:76:26:b7:8a:18" against list &{[{26:23:8d:cf:1e:49 masternode} {7a:dc:f6:95:9e:1c nodeb} {c2:76:26:b7:8a:18 nodea}]}
INFO: 2023/09/26 05:20:35.803225 Command line options: map[conn-limit:200 datapath:datapath db-prefix:/weavedb/weave-net docker-api: expect-npc:true http-addr:127.0.0.1:6784 ipalloc-init:consensus=2 ipalloc-range:10.32.0.0/12 metrics-addr:0.0.0.0:6782 name:c2:76:26:b7:8a:18 nickname:nodea no-dns:true no-masq-local:true port:6783]
INFO: 2023/09/26 05:20:35.803253 weave git-34de0b10a69c
INFO: 2023/09/26 05:20:36.145889 Re-exposing 10.244.1.192/24 on bridge "weave"
INFO: 2023/09/26 05:20:36.216791 Re-exposing 10.40.0.0/12 on bridge "weave"
INFO: 2023/09/26 05:20:36.263670 Re-exposing 10.244.1.128/24 on bridge "weave"
INFO: 2023/09/26 05:20:36.297143 Bridge type is bridged_fastdp
INFO: 2023/09/26 05:20:36.297158 Communication between peers is unencrypted.
INFO: 2023/09/26 05:20:36.400397 Our name is c2:76:26:b7:8a:18(nodea)
INFO: 2023/09/26 05:20:36.400493 Launch detected - using supplied peer list: [192.168.1.50 192.168.1.111]
INFO: 2023/09/26 05:20:36.400542 Using "no-masq-local" LocalRangeTracker
INFO: 2023/09/26 05:20:36.400558 Checking for pre-existing addresses on weave bridge
INFO: 2023/09/26 05:20:36.400717 weave bridge has address 10.244.1.192/24
INFO: 2023/09/26 05:20:36.400728 weave bridge has address 10.40.0.0/12
INFO: 2023/09/26 05:20:36.400735 weave bridge has address 10.244.1.128/24
INFO: 2023/09/26 05:20:36.402909 adding entry 10.40.0.0/14 to weaver-no-masq-local of 0
INFO: 2023/09/26 05:20:36.403047 added entry 10.40.0.0/14 to weaver-no-masq-local of 0
INFO: 2023/09/26 05:20:36.409257 adding entry 10.44.0.0/15 to weaver-no-masq-local of 0
INFO: 2023/09/26 05:20:36.409283 added entry 10.44.0.0/15 to weaver-no-masq-local of 0
INFO: 2023/09/26 05:20:36.410090 [allocator c2:76:26:b7:8a:18] Initialising with persisted data
INFO: 2023/09/26 05:20:36.410117 [allocator c2:76:26:b7:8a:18] Address 10.244.1.192/24 claimed by weave:expose - not in our range
INFO: 2023/09/26 05:20:36.410136 [allocator c2:76:26:b7:8a:18] Address 10.244.1.128/24 claimed by weave:expose - not in our range
INFO: 2023/09/26 05:20:36.410248 Sniffing traffic on datapath (via ODP)
INFO: 2023/09/26 05:20:36.410565 ->[192.168.1.50:6783] attempting connection
INFO: 2023/09/26 05:20:36.410697 ->[192.168.1.111:6783] attempting connection
INFO: 2023/09/26 05:20:36.414313 ->[192.168.1.50:6783|26:23:8d:cf:1e:49(masternode)]: connection ready; using protocol version 2
INFO: 2023/09/26 05:20:36.414559 overlay_switch ->[26:23:8d:cf:1e:49(masternode)] using fastdp
INFO: 2023/09/26 05:20:36.414696 ->[192.168.1.50:6783|26:23:8d:cf:1e:49(masternode)]: connection added (new peer)
INFO: 2023/09/26 05:20:36.415294 Listening for HTTP control messages on 127.0.0.1:6784
INFO: 2023/09/26 05:20:36.415329 Listening for metrics requests on 0.0.0.0:6782
INFO: 2023/09/26 05:20:36.420376 overlay_switch ->[26:23:8d:cf:1e:49(masternode)] using sleeve
INFO: 2023/09/26 05:20:36.420406 ->[192.168.1.50:6783|26:23:8d:cf:1e:49(masternode)]: connection fully established
INFO: 2023/09/26 05:20:36.420569 overlay_switch ->[26:23:8d:cf:1e:49(masternode)] using fastdp
INFO: 2023/09/26 05:20:36.423885 sleeve ->[192.168.1.50:6783|26:23:8d:cf:1e:49(masternode)]: Effective MTU verified at 1438
INFO: 2023/09/26 05:20:36.512608 ->[192.168.1.111:6783] error during connection attempt: dial tcp :0->192.168.1.111:6783: connect: connection refused
INFO: 2023/09/26 05:20:36.849680 [kube-peers] Added myself to peer list &{[{26:23:8d:cf:1e:49 masternode} {7a:dc:f6:95:9e:1c nodeb} {c2:76:26:b7:8a:18 nodea}]}
DEBU: 2023/09/26 05:20:36.875801 [kube-peers] Nodes that have disappeared: map[]
10.40.0.0
DEBU: 2023/09/26 05:20:37.100967 registering for updates for node delete events
INFO: 2023/09/26 05:20:38.499832 Discovered remote MAC 5e:c2:43:5e:f1:69 at 26:23:8d:cf:1e:49(masternode)
INFO: 2023/09/26 05:20:38.500305 Discovered remote MAC 5a:f4:60:ae:12:cd at 26:23:8d:cf:1e:49(masternode)
INFO: 2023/09/26 05:20:38.960669 Error checking version: Get "https://checkpoint-api.weave.works/v1/check/weave-net?arch=amd64&flag_docker-version=none&flag_kernel-version=5.15.0-83-generic&flag_network=fastdp&os=linux&signature=54QsVHRj4jPk%2F5Bxn0hI0JsJzggXiRyZe8o3d1H2iBg%3D&version=git-34de0b10a69c": dial tcp: lookup checkpoint-api.weave.works on 10.96.0.10:53: no such host
INFO: 2023/09/26 05:20:39.104883 ->[192.168.1.111:6783] attempting connection
INFO: 2023/09/26 05:20:39.174657 ->[192.168.1.111:6783] error during connection attempt: dial tcp :0->192.168.1.111:6783: connect: connection refused
INFO: 2023/09/26 05:20:40.940146 ->[192.168.1.111:6783] attempting connection
INFO: 2023/09/26 05:20:41.017780 ->[192.168.1.111:6783] error during connection attempt: dial tcp :0->192.168.1.111:6783: connect: connection refused
WARN: 2023/09/26 05:20:44.382651 [allocator]: Delete: no addresses for 4d0b75c682b813c46857f8fd664ad6c2bbf0d2a7546cb16ebab2b8bc69b1326a
INFO: 2023/09/26 05:20:45.885998 ->[192.168.1.111:6783] attempting connection
INFO: 2023/09/26 05:20:45.888609 ->[192.168.1.111:6783] error during connection attempt: dial tcp :0->192.168.1.111:6783: connect: connection refused
INFO: 2023/09/26 05:20:54.431106 ->[192.168.1.111:6783] attempting connection
INFO: 2023/09/26 05:20:54.433478 ->[192.168.1.111:6783] error during connection attempt: dial tcp :0->192.168.1.111:6783: connect: connection refused
INFO: 2023/09/26 05:21:03.615051 ->[192.168.1.111:6783] attempting connection
INFO: 2023/09/26 05:21:03.622726 ->[192.168.1.111:6783] error during connection attempt: dial tcp :0->192.168.1.111:6783: connect: connection refused
WARN: 2023/09/26 05:21:05.378496 [allocator]: Delete: no addresses for 4d0b75c682b813c46857f8fd664ad6c2bbf0d2a7546cb16ebab2b8bc69b1326a
WARN: 2023/09/26 05:21:05.392874 [allocator]: Delete: no addresses for 4d0b75c682b813c46857f8fd664ad6c2bbf0d2a7546cb16ebab2b8bc69b1326a
Note: checkpoint-api.weave.works is not resolvable, but that only affects Weave's version check (the "Error checking version" line above), not the overlay network itself. The repeated "connection refused" errors to 192.168.1.111:6783 suggest the weave peer on nodeb is not listening (its pod is itself in CrashLoopBackOff).
Full logs are in the gist linked below.
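The connection-refused errors above can be checked from any node with a quick port scan of the Weave control port. This is a diagnostic sketch, not part of the original report; the IPs are the node addresses shown in the pod listing, so substitute your own:

```shell
# Check whether TCP 6783 (Weave Net's control/mesh port) is reachable
# on each node. IPs below are the node addresses from this report.
for ip in 192.168.1.50 192.168.1.111 192.168.1.95; do
  if nc -z -w 2 "$ip" 6783; then
    echo "$ip: port 6783 reachable"
  else
    echo "$ip: port 6783 NOT reachable"
  fi
done
```

If a node shows as not reachable while its weave pod is running, a host firewall or network ACL between the nodes is the likely cause.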
Sep 26 11:15:19 nodeB kubelet[908188]: E0926 11:15:19.246141 908188 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=nginx pod=nginx2-sbx-deployment-5d8888c5ff-gqngv_default(a5909e00-970b-4881-8e02-28ea4ca5c34a)\"" pod="default/nginx2-sbx-deployment-5d8888c5ff-gqngv" podUID=a5909e00-970b-4881-8e02-28ea4ca5c34a
Sep 26 11:15:25 nodeB kubelet[908188]: I0926 11:15:25.267824 908188 scope.go:115] "RemoveContainer" containerID="e21d93b3bfe3ce68e2c2eecc05f8832c6c47c8af3c6e44a8249845c5eace8c71"
Sep 26 11:15:25 nodeB kubelet[908188]: I0926 11:15:25.267875 908188 scope.go:115] "RemoveContainer" containerID="b75aaa95ca35fc598bb08682722a8296c7a040f124fbb94c28c680187295109a"
Sep 26 11:15:25 nodeB kubelet[908188]: E0926 11:15:25.268778 908188 pod_workers.go:1294] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"weave\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=weave pod=weave-net-cggmf_kube-system(4c472fb5-259c-43d4-8e4e-b1ec08ce14a7)\", failed to \"StartContainer\" for \"weave-npc\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=weave-npc pod=weave-net-cggmf_kube-system(4c472fb5-259c-43d4-8e4e-b1ec08ce14a7)\"]" pod="kube-system/weave-net-cggmf" podUID=4c472fb5-259c-43d4-8e4e-b1ec08ce14a7
Sep 26 11:15:32 nodeB kubelet[908188]: I0926 11:15:32.265685 908188 scope.go:115] "RemoveContainer" containerID="981ed9ca1619dcc503dd143d071dbb87f71e092125e9ab6fe0a951531efe5e09"
Sep 26 11:15:32 nodeB kubelet[908188]: E0926 11:15:32.266104 908188 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=nginx pod=nginx2-sbx-deployment-5d8888c5ff-gqngv_default(a5909e00-970b-4881-8e02-28ea4ca5c34a)\"" pod="default/nginx2-sbx-deployment-5d8888c5ff-gqngv" podUID=a5909e00-970b-4881-8e02-28ea4ca5c34a
Sep 26 11:15:38 nodeB kubelet[908188]: I0926 11:15:38.264019 908188 scope.go:115] "RemoveContainer" containerID="e21d93b3bfe3ce68e2c2eecc05f8832c6c47c8af3c6e44a8249845c5eace8c71"
Sep 26 11:15:38 nodeB kubelet[908188]: I0926 11:15:38.264065 908188 scope.go:115] "RemoveContainer" containerID="b75aaa95ca35fc598bb08682722a8296c7a040f124fbb94c28c680187295109a"
Sep 26 11:15:38 nodeB kubelet[908188]: E0926 11:15:38.265002 908188 pod_workers.go:1294] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"weave\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=weave pod=weave-net-cggmf_kube-system(4c472fb5-259c-43d4-8e4e-b1ec08ce14a7)\", failed to \"StartContainer\" for \"weave-npc\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=weave-npc pod=weave-net-cggmf_kube-system(4c472fb5-259c-43d4-8e4e-b1ec08ce14a7)\"]" pod="kube-system/weave-net-cggmf" podUID=4c472fb5-259c-43d4-8e4e-b1ec08ce14a7
Sep 26 11:15:47 nodeB kubelet[908188]: I0926 11:15:47.263312 908188 scope.go:115] "RemoveContainer" containerID="981ed9ca1619dcc503dd143d071dbb87f71e092125e9ab6fe0a951531efe5e09"
Sep 26 11:15:47 nodeB containerd[721166]: time="2023-09-26T11:15:47.266443274+05:45" level=info msg="CreateContainer within sandbox \"f3e67cf37da64c3dd16fdab7d1272d2326a1540b8cfea19c4c8d509939897a55\" for container &ContainerMetadata{Name:nginx,Attempt:53,}"
Sep 26 11:15:47 nodeB containerd[721166]: time="2023-09-26T11:15:47.282015414+05:45" level=info msg="CreateContainer within sandbox \"f3e67cf37da64c3dd16fdab7d1272d2326a1540b8cfea19c4c8d509939897a55\" for &ContainerMetadata{Name:nginx,Attempt:53,} returns container id \"0b173b9f9e11081448a3d185d5d6df992f6ede882114003da74e29a263826bad\""
Sep 26 11:15:47 nodeB containerd[721166]: time="2023-09-26T11:15:47.282517418+05:45" level=info msg="StartContainer for \"0b173b9f9e11081448a3d185d5d6df992f6ede882114003da74e29a263826bad\""
Sep 26 11:15:47 nodeB containerd[721166]: time="2023-09-26T11:15:47.314167622+05:45" level=info msg="StartContainer for \"0b173b9f9e11081448a3d185d5d6df992f6ede882114003da74e29a263826bad\" returns successfully"
Sep 26 11:15:53 nodeB kubelet[908188]: I0926 11:15:53.265452 908188 scope.go:115] "RemoveContainer" containerID="e21d93b3bfe3ce68e2c2eecc05f8832c6c47c8af3c6e44a8249845c5eace8c71"
Sep 26 11:15:53 nodeB kubelet[908188]: I0926 11:15:53.265501 908188 scope.go:115] "RemoveContainer" containerID="b75aaa95ca35fc598bb08682722a8296c7a040f124fbb94c28c680187295109a"
Sep 26 11:15:53 nodeB kubelet[908188]: E0926 11:15:53.266688 908188 pod_workers.go:1294] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"weave\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=weave pod=weave-net-cggmf_kube-system(4c472fb5-259c-43d4-8e4e-b1ec08ce14a7)\", failed to \"StartContainer\" for \"weave-npc\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=weave-npc pod=weave-net-cggmf_kube-system(4c472fb5-259c-43d4-8e4e-b1ec08ce14a7)\"]" pod="kube-system/weave-net-cggmf" podUID=4c472fb5-259c-43d4-8e4e-b1ec08ce14a7
Sep 26 11:16:08 nodeB kubelet[908188]: I0926 11:16:08.222529 908188 scope.go:115] "RemoveContainer" containerID="e21d93b3bfe3ce68e2c2eecc05f8832c6c47c8af3c6e44a8249845c5eace8c71"
Sep 26 11:16:08 nodeB kubelet[908188]: I0926 11:16:08.222589 908188 scope.go:115] "RemoveContainer" containerID="b75aaa95ca35fc598bb08682722a8296c7a040f124fbb94c28c680187295109a"
Sep 26 11:16:08 nodeB kubelet[908188]: E0926 11:16:08.223595 908188 pod_workers.go:1294] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"weave\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=weave pod=weave-net-cggmf_kube-system(4c472fb5-259c-43d4-8e4e-b1ec08ce14a7)\", failed to \"StartContainer\" for \"weave-npc\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=weave-npc pod=weave-net-cggmf_kube-system(4c472fb5-259c-43d4-8e4e-b1ec08ce14a7)\"]" pod="kube-system/weave-net-cggmf" podUID=4c472fb5-259c-43d4-8e4e-b1ec08ce14a7
A similar issue occurs on the other worker node as well.
How to reproduce it?
On a fresh install of Kubernetes (1 control-plane master and 2 worker nodes), apply the Weave deployment from the official release: `kubectl apply -f https://github.com/weaveworks/weave/releases/download/v2.8.1/weave-daemonset-k8s.yaml`
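Weave Net requires TCP 6783 and UDP 6783/6784 to be open between all nodes. If a host firewall is active on these machines, opening those ports before installing may be necessary. A sketch using ufw (assuming ufw is the firewall on these Ubuntu hosts; adjust for firewalld or raw iptables):

```shell
# Ports used by Weave Net (per the Weave Net docs):
#   TCP 6783        - control/mesh connections between peers
#   UDP 6783, 6784  - sleeve and fastdp data paths
sudo ufw allow 6783/tcp
sudo ufw allow 6783/udp
sudo ufw allow 6784/udp
```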
Anything else we need to know?
Physical machines
Versions:
weave version v2.8.1
$ uname -a
Ubuntu 22.04.1 LTS (GNU/Linux 5.15.0-83-generic x86_64) on all 3 machines
$ kubectl version
Client Version: v1.27.4
Kustomize Version: v5.0.1
Server Version: v1.27.4
Logs:
$ kubectl logs -n kube-system <weave-net-pod> weave
attached above
More logs
https://gist.github.com/sherpaurgen/47e53d7bc637cf86c3b3d4e7aa643824
$ kubectl get events
LAST SEEN TYPE REASON OBJECT MESSAGE
24m Normal Starting node/masternode Starting kubelet.
24m Warning InvalidDiskCapacity node/masternode invalid capacity 0 on image filesystem
24m Normal NodeHasSufficientMemory node/masternode Node masternode status is now: NodeHasSufficientMemory
24m Normal NodeHasNoDiskPressure node/masternode Node masternode status is now: NodeHasNoDiskPressure
24m Normal NodeHasSufficientPID node/masternode Node masternode status is now: NodeHasSufficientPID
24m Normal NodeAllocatableEnforced node/masternode Updated Node Allocatable limit across pods
23m Normal Starting node/masternode
23m Normal RegisteredNode node/masternode Node masternode event: Registered Node masternode in Controller
9m4s Normal SandboxChanged pod/nginx-sbx-deployment-5d8888c5ff-4sk4v Pod sandbox changed, it will be killed and re-created.
18m Normal Killing pod/nginx-sbx-deployment-5d8888c5ff-4sk4v Stopping container nginx
34m Warning BackOff pod/nginx-sbx-deployment-5d8888c5ff-4sk4v Back-off restarting failed container nginx in pod nginx-sbx-deployment-5d8888c5ff-4sk4v_default(ad4902b0-f978-4088-a610-100d78e490bd)
4m Warning FailedKillPod pod/nginx-sbx-deployment-5d8888c5ff-4sk4v error killing pod: failed to "KillPodSandbox" for "ad4902b0-f978-4088-a610-100d78e490bd" with KillPodSandboxError: "rpc error: code = Unknown desc = failed to destroy network for sandbox \"e4b3b18e1d2dd8888bbc3b031d9fbb2fbe5f71200c6590dd1bd8b40c549fd3a8\": plugin type=\"weave-net\" name=\"weave\" failed (delete): Delete \"http://127.0.0.1:6784/ip/e4b3b18e1d2dd8888bbc3b031d9fbb2fbe5f71200c6590dd1bd8b40c549fd3a8\": dial tcp 127.0.0.1:6784: connect: connection refused"
58m Normal Pulled pod/nginx-sbx-deployment-5d8888c5ff-fj2jq Container image "nginx:1.24" already present on machine
12m Normal Killing pod/nginx-sbx-deployment-5d8888c5ff-fj2jq Stopping container nginx
8m17s Normal SandboxChanged pod/nginx-sbx-deployment-5d8888c5ff-fj2jq Pod sandbox changed, it will be killed and re-created.
83m Warning BackOff pod/nginx-sbx-deployment-5d8888c5ff-fj2jq Back-off restarting failed container nginx in pod nginx-sbx-deployment-5d8888c5ff-fj2jq_default(fea7e842-cb7e-42a9-a452-9438451710b4)
3m12s Warning FailedKillPod pod/nginx-sbx-deployment-5d8888c5ff-fj2jq error killing pod: failed to "KillPodSandbox" for "fea7e842-cb7e-42a9-a452-9438451710b4" with KillPodSandboxError: "rpc error: code = Unknown desc = failed to destroy network for sandbox \"248dfef27a109fdbb7ed87dbec75919b6e94da7a4e3a1e1c0db0db24249c54e1\": plugin type=\"weave-net\" name=\"weave\" failed (delete): Delete \"http://127.0.0.1:6784/ip/248dfef27a109fdbb7ed87dbec75919b6e94da7a4e3a1e1c0db0db24249c54e1\": dial tcp 127.0.0.1:6784: connect: connection refused"
3m40s Normal Scheduled pod/nginx2-sbx-deployment-5d8888c5ff-5vgdf Successfully assigned default/nginx2-sbx-deployment-5d8888c5ff-5vgdf to nodea
3m40s Warning FailedCreatePodSandBox pod/nginx2-sbx-deployment-5d8888c5ff-5vgdf Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "e42305556b393685737551d0001539b1d336a35466caf40e2c703a887e8562a6": plugin type="weave-net" name="weave" failed (add): unable to allocate IP address: Post "http://127.0.0.1:6784/ip/e42305556b393685737551d0001539b1d336a35466caf40e2c703a887e8562a6": dial tcp 127.0.0.1:6784: connect: connection refused
6s Normal SandboxChanged pod/nginx2-sbx-deployment-5d8888c5ff-5vgdf Pod sandbox changed, it will be killed and re-created.
2m45s Normal Pulled pod/nginx2-sbx-deployment-5d8888c5ff-5vgdf Container image "nginx:1.24" already present on machine
2m45s Normal Created pod/nginx2-sbx-deployment-5d8888c5ff-5vgdf Created container nginx
2m45s Normal Started pod/nginx2-sbx-deployment-5d8888c5ff-5vgdf Started container nginx
32s Normal Killing pod/nginx2-sbx-deployment-5d8888c5ff-5vgdf Stopping container nginx
3m41s Normal Scheduled pod/nginx2-sbx-deployment-5d8888c5ff-gqngv Successfully assigned default/nginx2-sbx-deployment-5d8888c5ff-gqngv to nodeb
3m40s Warning FailedCreatePodSandBox pod/nginx2-sbx-deployment-5d8888c5ff-gqngv Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "713ed7fc025a5dac1ebd36fb05fe22c1201539d5c734f5373cf512f7d22369ff": plugin type="weave-net" name="weave" failed (add): unable to allocate IP address: Post "http://127.0.0.1:6784/ip/713ed7fc025a5dac1ebd36fb05fe22c1201539d5c734f5373cf512f7d22369ff": dial tcp 127.0.0.1:6784: connect: connection refused
2m21s Normal SandboxChanged pod/nginx2-sbx-deployment-5d8888c5ff-gqngv Pod sandbox changed, it will be killed and re-created.
2m21s Normal Pulled pod/nginx2-sbx-deployment-5d8888c5ff-gqngv Container image "nginx:1.24" already present on machine
2m21s Normal Created pod/nginx2-sbx-deployment-5d8888c5ff-gqngv Created container nginx
2m21s Normal Started pod/nginx2-sbx-deployment-5d8888c5ff-gqngv Started container nginx
3m41s Normal SuccessfulCreate replicaset/nginx2-sbx-deployment-5d8888c5ff Created pod: nginx2-sbx-deployment-5d8888c5ff-gqngv
3m41s Normal SuccessfulCreate replicaset/nginx2-sbx-deployment-5d8888c5ff Created pod: nginx2-sbx-deployment-5d8888c5ff-5vgdf
3m41s Normal ScalingReplicaSet deployment/nginx2-sbx-deployment Scaled up replica set nginx2-sbx-deployment-5d8888c5ff to 2
77m Normal Starting node/nodea
75m Normal Starting node/nodea
73m Normal Starting node/nodea
71m Normal Starting node/nodea
67m Normal Starting node/nodea
60m Normal Starting node/nodea
52m Normal Starting node/nodea
46m Normal Starting node/nodea
40m Normal Starting node/nodea
29m Normal Starting node/nodea
23m Normal RegisteredNode node/nodea Node nodea event: Registered Node nodea in Controller
22m Normal Starting node/nodea
20m Normal Starting node/nodea
19m Normal Starting node/nodea
14m Normal Starting node/nodea
9m49s Normal Starting node/nodea
51s Normal Starting node/nodea
82m Normal Starting node/nodeb
76m Normal Starting node/nodeb
69m Normal Starting node/nodeb
63m Normal Starting node/nodeb
57m Normal Starting node/nodeb
50m Normal Starting node/nodeb
44m Normal Starting node/nodeb
37m Normal Starting node/nodeb
31m Normal Starting node/nodeb
23m Normal RegisteredNode node/nodeb Node nodeb event: Registered Node nodeb in Controller
19m Normal Starting node/nodeb
13m Normal Starting node/nodeb
7m33s Normal Starting node/nodeb
2m20s Normal Starting node/nodeb