Description
What happened?
Jan 5 03:12:34 oif-kvm2 k8s-certs-renew.sh[838837]: ## Expiration before renewal ##
Jan 5 03:12:34 oif-kvm2 k8s-certs-renew.sh[838839]: [check-expiration] Reading configuration from the cluster...
Jan 5 03:12:34 oif-kvm2 k8s-certs-renew.sh[838839]: [check-expiration] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
Jan 5 03:12:34 oif-kvm2 k8s-certs-renew.sh[838839]: W0105 03:12:34.568186 838839 utils.go:69] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [192.168.0.10]; the provided value is: [169.254.25.10]
Jan 5 03:12:34 oif-kvm2 k8s-certs-renew.sh[838839]: CERTIFICATE EXPIRES RESIDUAL TIME CERTIFICATE AUTHORITY EXTERNALLY MANAGED
Jan 5 03:12:34 oif-kvm2 k8s-certs-renew.sh[838839]: admin.conf Nov 30, 2026 19:12 UTC 329d ca no
Jan 5 03:12:34 oif-kvm2 k8s-certs-renew.sh[838839]: apiserver Nov 30, 2026 19:12 UTC 329d ca no
Jan 5 03:12:34 oif-kvm2 k8s-certs-renew.sh[838839]: apiserver-kubelet-client Nov 30, 2026 19:12 UTC 329d ca no
Jan 5 03:12:34 oif-kvm2 k8s-certs-renew.sh[838839]: controller-manager.conf Nov 30, 2026 19:12 UTC 329d ca no
Jan 5 03:12:34 oif-kvm2 k8s-certs-renew.sh[838839]: front-proxy-client Nov 30, 2026 19:12 UTC 329d front-proxy-ca no
Jan 5 03:12:34 oif-kvm2 k8s-certs-renew.sh[838839]: scheduler.conf Nov 30, 2026 19:12 UTC 329d ca no
Jan 5 03:12:34 oif-kvm2 k8s-certs-renew.sh[838839]: !MISSING! super-admin.conf
Jan 5 03:12:34 oif-kvm2 k8s-certs-renew.sh[838839]: CERTIFICATE AUTHORITY EXPIRES RESIDUAL TIME EXTERNALLY MANAGED
Jan 5 03:12:34 oif-kvm2 k8s-certs-renew.sh[838839]: ca Nov 20, 2034 07:01 UTC 8y no
Jan 5 03:12:34 oif-kvm2 k8s-certs-renew.sh[838839]: front-proxy-ca Nov 20, 2034 07:01 UTC 8y no
Jan 5 03:12:34 oif-kvm2 k8s-certs-renew.sh[838837]: ## Renewing certificates managed by kubeadm ##
Jan 5 03:12:34 oif-kvm2 k8s-certs-renew.sh[838965]: [renew] Reading configuration from the cluster...
Jan 5 03:12:36 oif-kvm2 kubelet[631343]: I0105 03:12:36.053072 631343 status_manager.go:851] "Failed to get status for pod" podUID="e5d3ecbffb266ee324389ac118b547c8" pod="kube-system/kube-scheduler-oif-kvm2" err="Get "https://127.0.0.1:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-oif-kvm2\": dial tcp 127.0.0.1:6443: connect: connection refused"
Jan 5 03:12:36 oif-kvm2 kubelet[631343]: I0105 03:12:36.054148 631343 scope.go:117] "RemoveContainer" containerID="d4099339413d933a0471ed2271a65a6660fcdbb00d6e2bdabf8cf9ddc24d4170"
Jan 5 03:12:36 oif-kvm2 kubelet[631343]: I0105 03:12:36.054302 631343 status_manager.go:851] "Failed to get status for pod" podUID="e5d3ecbffb266ee324389ac118b547c8" pod="kube-system/kube-scheduler-oif-kvm2" err="Get "https://127.0.0.1:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-oif-kvm2\": dial tcp 127.0.0.1:6443: connect: connection refused"
Jan 5 03:12:36 oif-kvm2 kubelet[631343]: I0105 03:12:36.054549 631343 status_manager.go:851] "Failed to get status for pod" podUID="333cecdea9f40855d65570478aa706d3" pod="kube-system/kube-controller-manager-oif-kvm2" err="Get "https://127.0.0.1:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-oif-kvm2\": dial tcp 127.0.0.1:6443: connect: connection refused"
Jan 5 03:12:37 oif-kvm2 kubelet[631343]: I0105 03:12:37.060827 631343 status_manager.go:851] "Failed to get status for pod" podUID="f978f9cf65d8d30ad80b47bd8556e920" pod="kube-system/kube-apiserver-oif-kvm2" err="Get "https://127.0.0.1:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-oif-kvm2\": dial tcp 127.0.0.1:6443: connect: connection refused"
Jan 5 03:12:37 oif-kvm2 kubelet[631343]: I0105 03:12:37.061135 631343 status_manager.go:851] "Failed to get status for pod" podUID="e5d3ecbffb266ee324389ac118b547c8" pod="kube-system/kube-scheduler-oif-kvm2" err="Get "https://127.0.0.1:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-oif-kvm2\": dial tcp 127.0.0.1:6443: connect: connection refused"
Jan 5 03:12:37 oif-kvm2 kubelet[631343]: I0105 03:12:37.061249 631343 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f3355e5f4930e51b371865feb615ded22fe3b420649642f3e14ca5ee6502b521"
Jan 5 03:12:37 oif-kvm2 kubelet[631343]: I0105 03:12:37.061376 631343 status_manager.go:851] "Failed to get status for pod" podUID="333cecdea9f40855d65570478aa706d3" pod="kube-system/kube-controller-manager-oif-kvm2" err="Get "https://127.0.0.1:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-oif-kvm2\": dial tcp 127.0.0.1:6443: connect: connection refused"
Jan 5 03:12:37 oif-kvm2 kubelet[631343]: I0105 03:12:37.061585 631343 status_manager.go:851] "Failed to get status for pod" podUID="f978f9cf65d8d30ad80b47bd8556e920" pod="kube-system/kube-apiserver-oif-kvm2" err="Get "https://127.0.0.1:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-oif-kvm2\": dial tcp 127.0.0.1:6443: connect: connection refused"
Jan 5 03:12:37 oif-kvm2 kubelet[631343]: I0105 03:12:37.062002 631343 status_manager.go:851] "Failed to get status for pod" podUID="e5d3ecbffb266ee324389ac118b547c8" pod="kube-system/kube-scheduler-oif-kvm2" err="Get "https://127.0.0.1:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-oif-kvm2\": dial tcp 127.0.0.1:6443: connect: connection refused"
Jan 5 03:12:37 oif-kvm2 kubelet[631343]: I0105 03:12:37.062256 631343 status_manager.go:851] "Failed to get status for pod" podUID="333cecdea9f40855d65570478aa706d3" pod="kube-system/kube-controller-manager-oif-kvm2" err="Get "https://127.0.0.1:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-oif-kvm2\": dial tcp 127.0.0.1:6443: connect: connection refused"
Jan 5 03:12:37 oif-kvm2 kubelet[631343]: I0105 03:12:37.062446 631343 status_manager.go:851] "Failed to get status for pod" podUID="f978f9cf65d8d30ad80b47bd8556e920" pod="kube-system/kube-apiserver-oif-kvm2" err="Get "https://127.0.0.1:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-oif-kvm2\": dial tcp 127.0.0.1:6443: connect: connection refused"
### After the crash, the k8s-certs-renew.sh scheduled job automatically ran a second time, and this run reported a certificate expiration date of Jan 04, 2027. The two runs give different expiration dates, which is also strange (a manual re-check is sketched after the log below).
Jan 5 03:12:38 oif-kvm2 k8s-certs-renew.sh[840477]: [check-expiration] Error reading configuration from the Cluster. Falling back to default configuration
Jan 5 03:12:38 oif-kvm2 k8s-certs-renew.sh[840477]: CERTIFICATE EXPIRES RESIDUAL TIME CERTIFICATE AUTHORITY EXTERNALLY MANAGED
Jan 5 03:12:38 oif-kvm2 k8s-certs-renew.sh[840477]: admin.conf Jan 04, 2027 19:12 UTC 364d ca no
Jan 5 03:12:38 oif-kvm2 k8s-certs-renew.sh[840477]: apiserver Jan 04, 2027 19:12 UTC 364d ca no
Jan 5 03:12:38 oif-kvm2 k8s-certs-renew.sh[840477]: !MISSING! apiserver-etcd-client
Jan 5 03:12:38 oif-kvm2 k8s-certs-renew.sh[840477]: apiserver-kubelet-client Jan 04, 2027 19:12 UTC 364d ca no
Jan 5 03:12:38 oif-kvm2 k8s-certs-renew.sh[840477]: controller-manager.conf Jan 04, 2027 19:12 UTC 364d ca no
Jan 5 03:12:38 oif-kvm2 k8s-certs-renew.sh[840477]: !MISSING! etcd-healthcheck-client
Jan 5 03:12:38 oif-kvm2 k8s-certs-renew.sh[840477]: !MISSING! etcd-peer
Jan 5 03:12:38 oif-kvm2 k8s-certs-renew.sh[840477]: !MISSING! etcd-server
Jan 5 03:12:38 oif-kvm2 k8s-certs-renew.sh[840477]: front-proxy-client Jan 04, 2027 19:12 UTC 364d front-proxy-ca no
Jan 5 03:12:38 oif-kvm2 k8s-certs-renew.sh[840477]: scheduler.conf Jan 04, 2027 19:12 UTC 364d ca no
Jan 5 03:12:38 oif-kvm2 k8s-certs-renew.sh[840477]: !MISSING! super-admin.conf
Jan 5 03:12:38 oif-kvm2 k8s-certs-renew.sh[840477]: CERTIFICATE AUTHORITY EXPIRES RESIDUAL TIME EXTERNALLY MANAGED
Jan 5 03:12:38 oif-kvm2 k8s-certs-renew.sh[840477]: ca Nov 20, 2034 07:01 UTC 8y no
Jan 5 03:12:38 oif-kvm2 k8s-certs-renew.sh[840477]: !MISSING! etcd-ca
Jan 5 03:12:38 oif-kvm2 k8s-certs-renew.sh[840477]: front-proxy-ca Nov 20, 2034 07:01 UTC 8y no
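For comparison, the checks the script performs can be re-run by hand. This is only a sketch of how I looked at it; it assumes kubeadm is on the PATH of the control-plane node and that the Kubespray auto-renew job is installed as the k8s-certs-renew systemd timer/service (those unit names are an assumption):

```sh
# Same expiration check the script runs; with the API server up it reads
# kubeadm-config from the cluster, otherwise it falls back to the default
# configuration (which is why the second run also lists etcd certificates).
kubeadm certs check-expiration

# The cluster configuration the check normally reads
kubectl -n kube-system get cm kubeadm-config -o yaml

# When the renewal job last fired and what it logged
# (unit names assumed to be k8s-certs-renew.*)
systemctl list-timers k8s-certs-renew.timer
journalctl -u k8s-certs-renew.service
```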
What did you expect to happen?
The control-plane components should not have crashed during certificate renewal.
How can we reproduce it (as minimally and precisely as possible)?
It happens sporadically; the previous occurrence was about six months earlier.
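The only way I know to poke at it outside the schedule is to run the renewal job manually and watch the control-plane pods; again a sketch, assuming the Kubespray units are named k8s-certs-renew:

```sh
# Trigger the renewal job once, outside its normal schedule
systemctl start k8s-certs-renew.service

# Watch the kubelet and the control-plane containers while it runs
journalctl -fu kubelet
crictl ps -a | grep -E 'kube-apiserver|kube-controller-manager|kube-scheduler'
```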
OS
Other|Unsupported
Version of Ansible
CentOS 7.9
Kubernetes version 1.25
Version of Python
Python 3.9.18
Version of Kubespray (commit)
Network plugin used
calico
Full inventory with variables
1
Command used to invoke ansible
1
Output of ansible run
1
Anything else we need to know
1