What steps did you take and what happened:
I have an existing cluster running Kubernetes 1.32, deployed roughly a year ago, and I now want to upgrade it to 1.33. My upgrade procedure is:

1. Create a new `VSphereMachineTemplate` based on the existing one, with a new name and template image.
2. Manually edit the `KubeadmControlPlane`'s `.spec.machineTemplate.infrastructureRef.name` and `.spec.version` to point to the new machine template and Kubernetes version. I also bumped the kube-vip version from 0.6.4 to 0.8.10, since I have had a lot of problems with 0.6.4.
3. Wait for the new nodes to be rolled out.

However, the new node is stuck in the `Provisioned` state while `capi-kubeadm-control-plane-controller-manager` complains "Waiting for control plane to pass preflight checks to continue reconciliation". The new node's `kube-apiserver` log contains only one line, similar to the one described in cluster-api-provider-ibmcloud#2224 (Investigate and remove cloud-provider flag for kube-apiserver due to removal in v1.33).

I tried removing the `extraArgs` in the `KubeadmControlPlane` and rolling out again. This time the new node's `kube-apiserver` no longer complains about the `cloud-provider` argument, but there are still errors during startup. After manually restarting the `kube-*` pods by moving their manifests out of and back into `/etc/kubernetes/manifests`, I managed to get those pods to start without errors, but `capi-kubeadm-control-plane-controller-manager` still complains "Waiting for control plane to pass preflight checks to continue reconciliation".
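For reference, the edits described above amount to something like the following sketch of the `KubeadmControlPlane`. All names, namespaces, and exact version strings below are illustrative placeholders, not my real values:

```yaml
# Sketch of the manually edited KubeadmControlPlane fields (placeholder names).
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
kind: KubeadmControlPlane
metadata:
  name: my-cluster-control-plane   # hypothetical
spec:
  version: v1.33.0                 # bumped from the previous v1.32.x
  machineTemplate:
    infrastructureRef:
      apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
      kind: VSphereMachineTemplate
      name: my-cluster-cp-v1-33-0  # the new template created in step 1 (hypothetical name)
  kubeadmConfigSpec:
    clusterConfiguration:
      apiServer:
        extraArgs: {}              # cloud-provider flag removed, per the later rollout attempt
```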
What did you expect to happen:
New nodes to be rolled out properly.
Anything else you would like to add:
Environment:
Cluster-api-provider-vsphere version:
```
$ clusterctl upgrade plan
Checking if cert-manager needs upgrade...
Cert-Manager will be upgraded from "v1.16.2" to "v1.19.3"

Checking new release availability...

Latest release available for the v1beta1 Cluster API contract version:

NAME                     NAMESPACE                           TYPE                     CURRENT VERSION   NEXT VERSION
bootstrap-kubeadm        capi-kubeadm-bootstrap-system       BootstrapProvider        v1.9.4            v1.10.10
control-plane-kubeadm    capi-kubeadm-control-plane-system   ControlPlaneProvider     v1.9.4            v1.10.10
cluster-api              capi-system                         CoreProvider             v1.9.4            v1.10.10
infrastructure-vsphere   capv-system                         InfrastructureProvider   v1.12.0           v1.15.2

The current version of clusterctl could not upgrade to v1beta1 contract (only v1beta2 supported).

Latest release available for the v1beta2 Cluster API contract version:

NAME                     NAMESPACE                           TYPE                     CURRENT VERSION   NEXT VERSION
bootstrap-kubeadm        capi-kubeadm-bootstrap-system       BootstrapProvider        v1.9.4            v1.12.3
control-plane-kubeadm    capi-kubeadm-control-plane-system   ControlPlaneProvider     v1.9.4            v1.12.3
cluster-api              capi-system                         CoreProvider             v1.9.4            v1.12.3
infrastructure-vsphere   capv-system                         InfrastructureProvider   v1.12.0           v1.15.2

You can now apply the upgrade by executing the following command:

clusterctl upgrade apply --contract v1beta2
```
/kind bug
Kubernetes version: (use `kubectl version`): v1.30.1+vmware.1-fips (tanzu)
OS (e.g. from `/etc/os-release`):