This repository was archived by the owner on Oct 22, 2021. It is now read-only.

After enlarge database persistence disk size, no change applied to the database pod #1107

Open
@ShuangMen

Description

Is your feature request related to a problem? Please describe.
After kubecf is deployed, try to enlarge the database persistence disk size with the change below:
update values.yaml, changing the database disk size from 20Gi to 40Gi

  database:
    instances: ~
    persistence:
      size: 20Gi

==>

  database:
    instances: ~
    persistence:
      size: 40Gi

Then run helm upgrade; pod database-0 is terminated and started as a new one.
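
For reference, the upgrade step was roughly the following (release name and chart reference are assumptions here; adjust them to your deployment):

```shell
# Hypothetical release name "kubecf" and chart reference; substitute your own.
helm upgrade kubecf kubecf/kubecf \
  --namespace kubecf \
  --values values.yaml
```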

Then perform the checks below:

$k get qsts database  -n kubecf -o yaml
...
      volumeClaimTemplates:
      - metadata:
          name: pxc-data
        spec:
          accessModes:
          - ReadWriteOnce
          resources:
            requests:
              storage: 40Gi
          storageClassName: new-ibmc-vpc-block-10iops-tier
...
...
$k get sts database  -n kubecf -o yaml

  volumeClaimTemplates:
  - apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      creationTimestamp: null
      name: pxc-data
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 20Gi
      storageClassName: new-ibmc-vpc-block-10iops-tier
      volumeMode: Filesystem
    status:
      phase: Pending
...
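
Note that in Kubernetes the volumeClaimTemplates of an existing StatefulSet are immutable, so the new 40Gi size only applies to PVCs created for new replicas; the existing PVC keeps its old spec. If the storage class supports expansion, a workaround is to patch the existing PVC directly (the PVC name below assumes the usual `<template>-<pod>` naming and is not taken from the issue):

```shell
# Assumed PVC name: pxc-data-database-0 (template name + pod name).
kubectl patch pvc pxc-data-database-0 -n kubecf \
  -p '{"spec":{"resources":{"requests":{"storage":"40Gi"}}}}'
```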

Log in to the pod database-0 and check the disk size: no change has been applied to the disk, it is still 20G.

$ k exec -it database-0 sh -n kubecf
Defaulting container name to database.
Use 'kubectl describe pod/database-0 -n kubecf' to see all of the containers in this pod.
sh-4.4# df -h
Filesystem      Size  Used Avail Use% Mounted on
overlay          99G   47G   48G  50% /
tmpfs            64M     0   64M   0% /dev
tmpfs           7.3G     0  7.3G   0% /sys/fs/cgroup
/dev/vda2        99G   47G   48G  50% /root
shm              64M     0   64M   0% /dev/shm
/dev/vdf         20G  430M   20G   3% /var/lib/mysql
tmpfs           7.3G   16K  7.3G   1% /etc/mysql/tls/certs
tmpfs           7.3G   16K  7.3G   1% /run/secrets/kubernetes.io/serviceaccount
tmpfs           7.3G     0  7.3G   0% /proc/acpi
tmpfs           7.3G     0  7.3G   0% /proc/scsi
tmpfs           7.3G     0  7.3G   0% /sys/firmware
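
One thing worth checking in this situation is whether the storage class permits volume expansion at all; online resize of a bound PVC only works when `allowVolumeExpansion` is true. A quick check (this is a generic Kubernetes diagnostic, not a step from the original report):

```shell
kubectl get storageclass new-ibmc-vpc-block-10iops-tier \
  -o jsonpath='{.allowVolumeExpansion}'
```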

Describe the solution you'd like
After enlarging the disk size and applying the change with helm upgrade, there should be a smooth data migration from the original database disk to a new, bigger disk, and the new disk should then be mounted to the database pod.
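
Until such a migration is implemented, a common manual workaround (sketched here under the assumption that the operator or helm will recreate the StatefulSet with the new template; this is not from the original report) is to delete the StatefulSet while keeping its pods and PVCs:

```shell
# Remove the StatefulSet object only; pods and PVCs are left running.
# On older kubectl versions use --cascade=false instead of --cascade=orphan.
kubectl delete sts database -n kubecf --cascade=orphan
```

Combined with patching the PVC to the larger size, this lets the recreated StatefulSet pick up the 40Gi volumeClaimTemplate without destroying the existing data, assuming the storage class supports expansion.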
