Description
Which jobs are flaking?
periodic-cluster-api-e2e-mink8s-main: capi-e2e [It] When testing clusterctl upgrades using ClusterClass (v1.9=>current) [ClusterClass] Should create a management cluster and then upgrade all the providers [ClusterClass]
periodic-cluster-api-e2e-mink8s-release-1-11: capi-e2e [It] When testing clusterctl upgrades using ClusterClass (v1.8=>current) [ClusterClass] Should create a management cluster and then upgrade all the providers [ClusterClass]
periodic-cluster-api-e2e-release-1-10: capi-e2e [It] When testing clusterctl upgrades using ClusterClass (v1.8=>current) [ClusterClass] Should create a management cluster and then upgrade all the providers [ClusterClass]
periodic-cluster-api-e2e-mink8s-release-1-10: capi-e2e [It] When testing clusterctl upgrades using ClusterClass (v1.9=>current) on K8S latest ci mgmt cluster [ClusterClass] Should create a management cluster and then upgrade all the providers [ClusterClass]
Which tests are flaking?
capi-e2e [It] When testing clusterctl upgrades using ClusterClass (v1.8=>current) [ClusterClass] Should create a management cluster and then upgrade all the providers [ClusterClass]
capi-e2e [It] When testing clusterctl upgrades using ClusterClass (v1.9=>current) [ClusterClass] Should create a management cluster and then upgrade all the providers [ClusterClass]
capi-e2e [It] When testing clusterctl upgrades using ClusterClass (v1.9=>current) on K8S latest ci mgmt cluster [ClusterClass] Should create a management cluster and then upgrade all the providers [ClusterClass]
Since when has it been flaking?
Testgrid link
No response
Reason for failure (if possible)
[FAILED] Timed out after 300.001s.
Timed out waiting for Machine Deployment clusterctl-upgrade/clusterctl-upgrade-workload-kdcch4-md-0-btv77 to have 2 replicas
The function passed to Eventually returned the following error:
<*errors.fundamental | 0xc000b1a8a0>:
Machine count does not match existing nodes count
{
msg: "Machine count does not match existing nodes count",
stack: [0x22037ac, 0x503306, 0x502419, 0x9860df, 0x987182, 0x9846a5, 0x22031cf, 0x26934c9, 0x960113, 0x975513, 0x484341],
}
At one point, however, the function did return successfully.
Yet, Eventually failed because the matcher was not satisfied:
Expected
<int>: 1
to equal
<int>: 2
In [It] at: /home/prow/go/src/sigs.k8s.io/cluster-api/test/framework/machinedeployment_helpers.go:776 @ 12/15/25 14:49:53.267
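For readers less familiar with the framework internals, the failure shape above is Gomega's standard Eventually output: on some poll attempts the polled function returns an error ("Machine count does not match existing nodes count"), and on others it succeeds but returns a value the matcher rejects ("Expected <int>: 1 to equal <int>: 2"). Below is a minimal sketch of that pattern; countMachinesAndNodes and waitForMachineDeploymentReplicas are hypothetical names for illustration, not the actual helpers in test/framework/machinedeployment_helpers.go.

```go
package e2e

import (
	"context"
	"time"

	. "github.com/onsi/gomega"
	"github.com/pkg/errors"
)

// countMachinesAndNodes is hypothetical: it would count the Machines owned
// by the MachineDeployment on the management cluster and the corresponding
// Nodes on the workload cluster.
func countMachinesAndNodes(ctx context.Context) (machines, nodes int, err error) {
	// ... list Machines and Nodes via controller-runtime clients ...
	return 1, 1, nil // placeholder
}

func waitForMachineDeploymentReplicas(ctx context.Context, want int) {
	// Eventually polls the function until the matcher passes or the timeout
	// (5 minutes here, matching the 300s in the log) elapses. Both failure
	// modes seen in the log can occur: the function returning an error, or
	// the function succeeding with a count the matcher rejects.
	Eventually(func() (int, error) {
		machines, nodes, err := countMachinesAndNodes(ctx)
		if err != nil {
			return 0, err
		}
		if machines != nodes {
			// pkg/errors produces the *errors.fundamental seen in the dump.
			return 0, errors.New("Machine count does not match existing nodes count")
		}
		return machines, nil
	}, 5*time.Minute, 10*time.Second).Should(Equal(want),
		"Timed out waiting for Machine Deployment to have %d replicas", want)
}
```

Given that shape, the timeout here means the second worker Machine was either never created or its Node never joined within the 300s window, so the count stayed at 1.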
Anything else we need to know?
It seems the clusterctl upgrade tests from v1.8 and v1.9 to current are flaking. Does this require any action, given how old these versions are?
Label(s) to be applied
/kind flake
One or more /area labels. See https://github.com/kubernetes-sigs/cluster-api/labels?q=area for the list of labels.