Replies: 2 comments 1 reply
-
Hi @rizsk1220, changing the […]

[1] https://strimzi.io/blog/2025/02/13/moving-data-between-jbod-disks-using-cruise-control/
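For context, [1] walks through moving data between JBOD disks with Cruise Control: a `KafkaRebalance` resource in `remove-disks` mode drains the replicas off the listed volumes. A minimal sketch following the shape of that post (the cluster name `my-cluster` and the broker/volume IDs are placeholders):

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaRebalance
metadata:
  name: move-off-old-disk
  labels:
    strimzi.io/cluster: my-cluster   # must match the Kafka CR name
spec:
  mode: remove-disks                 # intra-broker move; requires Cruise Control
  moveReplicasOffVolumes:
    - brokerId: 0
      volumeIds: [0]                 # JBOD volume IDs to drain on this broker
```

Once drained, the emptied volumes can typically be removed from the `jbod` storage list in the Kafka CR. Note this only applies to JBOD storage.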
-
We took a volume snapshot of the old Persistent Volume Claim (PVC) and restored that snapshot to a newly created PVC. However, when we attached the new PVCs to the operator, the operator seemingly overwrote them without retaining the data, leading to data loss.
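Concretely, a snapshot/restore of this kind looks roughly like the sketch below (the snapshot class, size, and `csi-strimzi-pd-cmek` class name are assumptions). Note that the restored PVC must reuse the exact name the operator expects (for a single-disk broker, typically `data-<cluster>-kafka-<n>`), with the old PVC deleted first; otherwise the operator provisions a fresh, empty PVC under the expected name and the restored one is never mounted.

```yaml
# Sketch only: snapshot an existing broker PVC, then restore it under the
# exact name the operator expects. All names here are placeholders.
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: kafka-0-snap
spec:
  volumeSnapshotClassName: my-snapshot-class   # hypothetical snapshot class
  source:
    persistentVolumeClaimName: data-my-cluster-kafka-0
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-my-cluster-kafka-0       # must match the operator's naming scheme
spec:
  storageClassName: csi-strimzi-pd-cmek   # the new CMEK-backed class
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 20Gi
  dataSource:
    name: kafka-0-snap
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
```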
-
Strimzi Kafka installed via Helm chart.
Current configuration: 3 ZooKeeper and 3 Kafka pods.
We have Strimzi deployed on GKE, and the brokers were using non-CMEK disks. We tried to migrate the data from the non-CMEK disks to CMEK disks using the VolumeSnapshot tooling on the GKE cluster. After the snapshot migration, we attached the new CMEK disks and tried to bring the cluster back up, but the Strimzi operator overwrote the CMEK disks, leaving them with no data.
Old:

```yaml
storage:
  type: persistent-claim
  size: "20Gi"
  deleteClaim: true
```
New:

```yaml
{{- if .Values.storageClassCreate -}}
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: {{ .Values.storageClass | default "csi-strimzi-pd-cmek" | quote }}
provisioner: pd.csi.storage.gke.io
volumeBindingMode: {{ .Values.volumeBindingMode | default "Immediate" | quote }}
allowVolumeExpansion: true
parameters:
  type: {{ .Values.storageClassParametersType | default "pd-ssd" | quote }}
  disk-encryption-kms-key: {{ ......................................... }}
{{- end -}}
```
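For completeness, the Kafka CR has to opt into the new class explicitly, since `persistent-claim` storage falls back to the cluster default otherwise. A minimal sketch (the class name mirrors the template's default above; `deleteClaim: false` is a deliberate deviation from the old config to reduce the blast radius of accidental deletion):

```yaml
storage:
  type: persistent-claim
  size: "20Gi"
  class: csi-strimzi-pd-cmek   # the StorageClass created by the template above
  deleteClaim: false           # keep PVCs around if the Kafka CR is deleted
```

As far as I know, the operator will not migrate existing PVCs to a new class by itself, which is why the snapshot/restore step is needed at all.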
How can I properly perform disk maintenance on the cluster to change the StorageClass?
Specifically, I want to achieve a rolling update that would:

- Replace all PVCs and PVs backing the ZooKeeper and Kafka pods
- Use the new default StorageClass (or any other class)
- Preserve all data across the cluster
- Maintain cluster stability during the transition (no downtime)
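For reference, the per-broker outline we are considering is sketched below; it leans on the `strimzi.io/pause-reconciliation` annotation to keep the operator from recreating resources mid-swap. This is an untested outline with placeholder names, not a verified procedure:

```yaml
# Sketch of a per-broker StorageClass swap; all names are placeholders.
# 1. Pause reconciliation so the operator leaves PVCs alone, e.g.
#    kubectl annotate kafka my-cluster strimzi.io/pause-reconciliation="true"
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
  annotations:
    strimzi.io/pause-reconciliation: "true"   # drop the annotation to resume
# 2. For one broker at a time: snapshot its PVC, delete the old PVC, and
#    restore the snapshot into a PVC with the SAME name but the new
#    StorageClass (see the VolumeSnapshot/PVC sketch earlier in the thread).
# 3. Delete that broker's pod so it comes back attached to the restored PVC,
#    and wait for under-replicated partitions to clear before moving on.
# 4. Repeat for the remaining brokers, then remove the annotation and let the
#    operator reconcile normally.
```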