Problem Description
I'm not sure whether this is expected behaviour, so I figured I would report it. If I try to re-apply the sveltosclusters.lib.projectsveltos.io
CRD, I get the following behaviour:
❯ kubectl apply \
    --kubeconfig ~/.kube/configs/kind-management-cluster.yaml \
    --context kind-management-cluster \
    --server-side=true \
    -f manifests
<snip>
customresourcedefinition.apiextensions.k8s.io/sveltosclusters.lib.projectsveltos.io serverside-applied
error: Apply failed with 1 conflict: conflict with "application/apply-patch": .spec.versions
Please review the fields above--they currently have other managers. Here
are the ways you can resolve this warning:
* If you intend to manage all of these fields, please re-run the apply
  command with the `--force-conflicts` flag.
* If you do not intend to manage all of the fields, please edit your
  manifest to remove references to the fields that should keep their
  current managers.
* You may co-own fields by updating your manifest to match the existing
  value; in this case, you'll become the manager if the other manager(s)
  stop managing the field (remove it from their configuration).
See https://kubernetes.io/docs/reference/using-api/server-side-apply/#conflicts
ManagedFields conflict:
❯ k get --show-managed-fields crd sveltosclusters.lib.projectsveltos.io -o yaml | yq 'del(.status) | .metadata.managedFields'
- apiVersion: apiextensions.k8s.io/v1
  fieldsType: FieldsV1
  fieldsV1:
    f:metadata:
      f:annotations:
        f:controller-gen.kubebuilder.io/version: {}
    f:spec:
      f:group: {}
      f:names:
        f:kind: {}
        f:listKind: {}
        f:plural: {}
        f:singular: {}
      f:scope: {}
      f:versions: {}
  manager: kubectl
  operation: Apply
  time: "2024-05-05T02:10:49Z"
- apiVersion: apiextensions.k8s.io/v1
  fieldsType: FieldsV1
  fieldsV1:
    f:status:
      f:acceptedNames:
        f:kind: {}
        f:listKind: {}
        f:plural: {}
        f:singular: {}
      f:conditions:
        k:{"type":"Established"}:
          .: {}
          f:lastTransitionTime: {}
          f:message: {}
          f:reason: {}
          f:status: {}
          f:type: {}
        k:{"type":"NamesAccepted"}:
          .: {}
          f:lastTransitionTime: {}
          f:message: {}
          f:reason: {}
          f:status: {}
          f:type: {}
  manager: kube-apiserver
  operation: Update
  subresource: status
  time: "2024-05-05T02:10:49Z"
System Information
CLUSTERAPI VERSION: v1.6.3
SVELTOS VERSION: 0.28.0
KUBERNETES VERSION: 1.28.8
Other
I can work around the problem by re-running the apply with --force-conflicts=true,
and things seem to be okay...
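For reference, the workaround is just the original apply command with the extra flag added, roughly:

❯ kubectl apply \
    --kubeconfig ~/.kube/configs/kind-management-cluster.yaml \
    --context kind-management-cluster \
    --server-side=true \
    --force-conflicts=true \
    -f manifests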