What happened?
Deployed a k3s vcluster:
$ vcluster --version
vcluster version 0.24.1
$ vcluster create myvcluster --distro k3s -n tenant1
...
$ kubectl get pod myvcluster-0 -n tenant1 -o yaml | grep image | grep rancher
image: rancher/k3s:v1.32.1-k3s1
image: docker.io/rancher/k3s:v1.32.1-k3s1
imageID: docker.io/rancher/k3s@sha256:2ce69284b7f28bd1bde9f8503f0eb7fe77943cdbe3a88564b73ed6aef8b180d3
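For comparison, creating with `--distro k3s` should be roughly equivalent to deploying with values like the following (a sketch of the presumed effective config, not taken from actual CLI output):

```yaml
controlPlane:
  distro:
    k3s:
      enabled: true
```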
Then ran an upgrade with the following YAML:
controlPlane:
  distro:
    k3s:
      enabled: true
  statefulSet:
    scheduling:
      podManagementPolicy: OrderedReady
sync:
  fromHost:
    ingressClasses:
      enabled: true
  toHost:
    ingresses:
      enabled: true
with the command:
$ vcluster create myvcluster --upgrade -f ./vcluster-values.yaml -n tenant1
18:30:29 fatal seems like you were using k8s as a distro before and now have switched to k3s, please make sure to not switch between vCluster distros
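Since the control-plane pod image shown above is `rancher/k3s`, the existing vcluster was clearly running k3s, so the distro-switch check appears to misfire. A minimal sketch of cross-checking the distro from the image (the image-to-distro mapping below is an illustrative assumption, not vcluster's actual detection logic):

```shell
# Sketch: infer the running distro from the control-plane pod image
# instead of relying on the CLI's detection. Illustrative mapping only.
detect_distro() {
  case "$1" in
    *rancher/k3s*) echo k3s ;;
    *k0s*)         echo k0s ;;
    *)             echo k8s ;;
  esac
}

# On a live cluster the image would come from something like:
#   kubectl get pod myvcluster-0 -n tenant1 -o jsonpath='{.spec.containers[0].image}'
detect_distro "docker.io/rancher/k3s:v1.32.1-k3s1"   # prints: k3s
```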
What did you expect to happen?
Complete the upgrade
How can we reproduce it (as minimally and precisely as possible)?
All steps provided above
Anything else we need to know?
Running on a single node.
Host cluster Kubernetes version
$ kubectl version
Client Version: v1.31.0
Kustomize Version: v5.4.2
Server Version: v1.30.0
vcluster version
$ vcluster --version
vcluster version 0.24.1
VCluster Config
see issue description