Metrics server with Weave Net can't collect monitoring data from the node where its pod is placed #3434
Description
The metrics server can't collect monitoring data from the node where its own pod is placed. I think it is a policy or networking problem, because I can't ping the pod's host (r2s13) from inside the pod, while I am able to ping the other two nodes (same result when I tested from another pod; the exact check is sketched after the version details below).
I am trying to figure out why the metrics server isn't collecting stats from the node where its pod is placed. There are 3 nodes in my cluster (one master and 2 workers).
Metrics server version is 0.3.1.
Kubernetes version is 1.12, installed with kubeadm.
The CNI plugin is Weave Net.
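For reference, the connectivity check was roughly the following. The metrics-server image has no shell, so this is a sketch using a throwaway busybox pod pinned to r2s13 (the pod name netcheck and the nodeName override are just for this test; the node IPs are the ones from the error log below):
# start a test pod on r2s13 (hypothetical name "netcheck"; the override pins it to that node)
kubectl run netcheck --rm -it --restart=Never --image=busybox \
  --overrides='{"apiVersion":"v1","spec":{"nodeName":"r2s13"}}' -- sh
# inside the test pod: ping the pod's own host; this is the one that times out
ping -c 3 10.199.183.218
# pings to the other two nodes succeed
ping -c 3 10.199.183.217
ping -c 3 10.199.183.219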
"kubectl top node" output:
NAME    CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
r2s12   344m         4%     3079Mi          12%
r2s14   67m          0%     1695Mi          21%
r2s13
In the metrics-server log, the line below is repeated:
E1023 15:28:14.643011 1 manager.go:102] unable to fully collect metrics: unable to fully scrape metrics from source kubelet_summary:r2s13: unable to fetch metrics from Kubelet r2s13 (10.199.183.218): Get https://10.199.183.218:10250/stats/summary/: dial tcp 10.199.183.218:10250: i/o timeout
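To confirm this is a pod-to-host networking problem rather than a kubelet or TLS issue, the kubelet port can be probed both from the host itself and from a pod running on that host. A rough sketch (10250 is the standard kubelet port; the commands assume curl is available on the node and nc in the test pod image):
# on r2s13 itself: a TCP/TLS connection to the kubelet succeeds
# (an HTTP 401/403 without credentials still proves the port is reachable)
curl -k https://10.199.183.218:10250/stats/summary
# from a pod on r2s13 (e.g. the busybox pod above): the connect itself times out
nc -w 3 10.199.183.218 10250 </dev/null; echo "exit code: $?"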
"netstat -a | grep 10250" output from r2s13:
tcp6 0 0 [::]:10250 [::]:* LISTEN
tcp6 0 0 r2s13.r2s13:10250 r2s12.r2s12:33950 ESTABLISHED
"netstat -a | grep 10250" output from inside the pod:
tcp 0 1 metrics-server-7fbd9b8589-r9nnv:60384 10.199.183.218:10250 SYN_SENT
tcp 0 0 metrics-server-7fbd9b8589-r9nnv:43876 10-199-183-217.kubernetes.default.svc.cluster.local:10250 ESTABLISHED
tcp 0 0 metrics-server-7fbd9b8589-r9nnv:45926 10.199.183.219:10250 ESTABLISHED
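The SYN_SENT entry means the TCP handshake to 10.199.183.218:10250 never completes, so the packets are presumably dropped somewhere on the host. One way to check whether Weave's network policy controller (or some other host firewall rule) is responsible is to inspect the iptables rules and counters on r2s13 while re-running the failing request (this just greps for anything Weave-related; nothing cluster-specific is assumed):
# on r2s13: dump every Weave-related iptables rule (NPC installs WEAVE-NPC* chains)
sudo iptables-save | grep -i weave
# watch INPUT-chain packet counters while the pod retries the connection
sudo iptables -L INPUT -v -n --line-numbers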
I do not know whether the default Weave Net NPC settings are blocking traffic from the pod to its own host, or whether it is something else.
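A further check, assuming the stock weave-net DaemonSet layout (containers named weave and weave-npc in the kube-system namespace), would be to look at the NPC logs and at Weave's own status report on r2s13:
# network policy controller logs from all weave-net pods
kubectl -n kube-system logs -l name=weave-net -c weave-npc --tail=100
# Weave status from the weave-net pod on r2s13 (replace the placeholder with the real pod name)
kubectl -n kube-system exec <weave-net-pod-on-r2s13> -c weave -- /home/weave/weave --local status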