Description
What happened:
- The Karmada control plane reuses an existing kube-apiserver as the karmada-apiserver, following the document https://karmada.io/docs/faq/#can-i-install-karmada-in-a-kubernetes-cluster-and-reuse-the-kube-apiserver-as-karmada-apiserver.
- The status of the resource template gets modified by the aggregate-status logic (aggregatestatus.go).
Scenario encountered:
I wanted to clean up the resources in the member cluster, so I deleted the propagation and override policies at the same time.
I then found that status.capacity of the successfully synchronized PVC had been removed by karmada-controller-manager; only status.phase was set.
After that, kube-controller-manager added status.accessModes back to the PVC.
This derives from issue #3873.
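For illustration only, a minimal Go sketch of the status change observed on the control-plane PVC; the concrete values (10Gi, ReadWriteOnce) are assumptions, the field names come from corev1.PersistentVolumeClaimStatus:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	// Status of the PVC on the control plane after it was successfully
	// synchronized (the values are illustrative assumptions).
	synced := corev1.PersistentVolumeClaimStatus{
		Phase:       corev1.ClaimBound,
		AccessModes: []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
		Capacity: corev1.ResourceList{
			corev1.ResourceStorage: resource.MustParse("10Gi"),
		},
	}

	// Status after karmada-controller-manager rewrote it: only the phase
	// survives; capacity and accessModes are gone.
	rewritten := corev1.PersistentVolumeClaimStatus{
		Phase: corev1.ClaimBound,
	}

	fmt.Printf("synced status:    %+v\n", synced)
	fmt.Printf("rewritten status: %+v\n", rewritten)
}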
apiserver log
I0804 17:13:03.466570 514403 httplog.go:129] "HTTP" verb="PUT" URI="/api/v1/namespaces/smartsales/persistentvolumeclaims/milvus-etcd-data/status" latency="30.230827ms" userAgent="karmada-controller-manager/v0.0.0 (linux/amd64) kubernetes/$Format" audit-ID="ce98ac98-174c-43c0-9281-4c9c5901d512" srcIP="10.29.8.83:45524" apf_pl="exempt" apf_fs="exempt" resp=200
I0804 17:13:03.480505 514403 httplog.go:129] "HTTP" verb="PUT" URI="/api/v1/namespaces/smartsales/persistentvolumeclaims/milvus-etcd-data/status" latency="12.009373ms" userAgent="kube-controller-manager/v1.23.17 (linux/amd64) kubernetes/416a680/system:serviceaccount:kube-system:persistent-volume-binder" audit-ID="15908b53-7079-45f6-ba0e-91898803a277" srcIP="10.11.96.15:45277" apf_pl="workload-high" apf_fs="kube-system-service-accounts" resp=200
What you expected to happen:
The PVC status should not be changed on the control plane cluster.
How to reproduce it (as minimally and precisely as possible):
rule.yaml
apiVersion: policy.karmada.io/v1alpha1
kind: ClusterPropagationPolicy
metadata:
  name: workload-propagation-policy
spec:
  resourceSelectors:
    - apiVersion: apps/v1
      kind: Deployment
    - apiVersion: apps/v1
      kind: StatefulSet
  propagateDeps: true
  placement:
    clusterAffinity:
      clusterNames:
        - sh5-online
---
apiVersion: policy.karmada.io/v1alpha1
kind: ClusterOverridePolicy
metadata:
  name: workload-override-policy
spec:
  resourceSelectors:
    - apiVersion: apps/v1
      kind: Deployment
    - apiVersion: apps/v1
      kind: StatefulSet
  overrideRules:
    - overriders:
        plaintext:
          - path: /spec/replicas
            operator: replace
            value: 0
          - path: /spec/template/spec/tolerations
            operator: add
            value:
              - key: "eks.tke.cloud.tencent.com/eklet"
                operator: "Exists"
                effect: "NoSchedule"
          - path: /spec/template/spec/schedulerName
            operator: replace
            value: "tke-scheduler"
---
apiVersion: policy.karmada.io/v1alpha1
kind: ClusterOverridePolicy
metadata:
  name: pvc
spec:
  resourceSelectors:
    - apiVersion: v1
      kind: PersistentVolumeClaim
  overrideRules:
    - overriders:
        plaintext:
          - path: /spec/volumeName
            operator: remove
- kubectl apply -f rule.yaml
- wait for all resources to be synchronized
- kubectl delete -f rule.yaml
- kubectl get pvc on the control plane cluster
Anything else we need to know?:
The code where kube-controller-manager updates the PVC status: https://github.com/kubernetes/kubernetes/blob/1635c380b26a1d8cc25d36e9feace9797f4bae3c/pkg/controller/volume/persistentvolume/pv_controller.go#L471-L532
The karmada-controller-manager updates the PVC status via:
err = updateResourceStatus(c.DynamicClient, c.RESTMapper, c.ResourceInterpreter, resourceTemplate, binding.Status)
karmada/pkg/resourceinterpreter/default/native/aggregatestatus.go
Lines 470 to 506 in fdc7ac6
func aggregatePersistentVolumeClaimStatus(object *unstructured.Unstructured, aggregatedStatusItems []workv1alpha2.AggregatedStatusItem) (*unstructured.Unstructured, error) {
	pvc := &corev1.PersistentVolumeClaim{}
	err := helper.ConvertToTypedObject(object, pvc)
	if err != nil {
		return nil, err
	}

	newStatus := &corev1.PersistentVolumeClaimStatus{Phase: corev1.ClaimBound}
	for _, item := range aggregatedStatusItems {
		if item.Status == nil {
			continue
		}
		temp := &corev1.PersistentVolumeClaimStatus{}
		if err = json.Unmarshal(item.Status.Raw, temp); err != nil {
			return nil, err
		}
		klog.V(3).Infof("Grab pvc(%s/%s) status from cluster(%s), phase: %s", pvc.Namespace,
			pvc.Name, item.ClusterName, temp.Phase)

		if temp.Phase == corev1.ClaimLost {
			newStatus.Phase = corev1.ClaimLost
			break
		}
		if temp.Phase != corev1.ClaimBound {
			newStatus.Phase = temp.Phase
		}
	}

	if reflect.DeepEqual(pvc.Status, *newStatus) {
		klog.V(3).Infof("Ignore update pvc(%s/%s) status as up to date", pvc.Namespace, pvc.Name)
		return object, nil
	}

	pvc.Status = *newStatus
	return helper.ToUnstructured(pvc)
}
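For comparison, here is a minimal sketch of one possible direction (an assumption on my side, not an actual fix in the project): carry over the member cluster's full status instead of rebuilding one that only holds the phase, so fields such as capacity and accessModes are not wiped. It assumes the same package layout and helpers (package native, helper.ConvertToTypedObject, helper.ToUnstructured) used by the function above:

package native

import (
	"encoding/json"
	"reflect"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"

	workv1alpha2 "github.com/karmada-io/karmada/pkg/apis/work/v1alpha2"
	"github.com/karmada-io/karmada/pkg/util/helper"
)

// aggregatePVCStatusKeepingFields is a hypothetical variant that takes the
// member cluster's status wholesale instead of rebuilding one that only
// carries the phase, so capacity and accessModes survive the write-back.
func aggregatePVCStatusKeepingFields(object *unstructured.Unstructured, aggregatedStatusItems []workv1alpha2.AggregatedStatusItem) (*unstructured.Unstructured, error) {
	pvc := &corev1.PersistentVolumeClaim{}
	if err := helper.ConvertToTypedObject(object, pvc); err != nil {
		return nil, err
	}

	// Start from the status already on the resource template so that an
	// empty aggregation leaves the object untouched.
	newStatus := pvc.Status.DeepCopy()
	for _, item := range aggregatedStatusItems {
		if item.Status == nil {
			continue
		}
		temp := &corev1.PersistentVolumeClaimStatus{}
		if err := json.Unmarshal(item.Status.Raw, temp); err != nil {
			return nil, err
		}
		// Copy the whole status reported by the member cluster, not just
		// the phase.
		newStatus = temp
		if temp.Phase == corev1.ClaimLost {
			break
		}
	}

	if reflect.DeepEqual(pvc.Status, *newStatus) {
		return object, nil
	}
	pvc.Status = *newStatus
	return helper.ToUnstructured(pvc)
}

With this shape, a bound PVC would keep its status.capacity and status.accessModes on the control plane instead of being reduced to phase only.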
Environment:
- Karmada version:
- kubectl-karmada or karmadactl version (the result of kubectl-karmada version or karmadactl version):
  kubectl karmada version: version.Info{GitVersion:"v1.6.1", GitCommit:"fdc7ac62c70b571d091a795cbe9b9fceac5f1c2c", GitTreeState:"clean", BuildDate:"2023-07-06T03:35:37Z", GoVersion:"go1.20.4", Compiler:"gc", Platform:"linux/amd64"}
- Others: