---
title: Snapshot Mount Support for LVM-LocalPV
authors:
  - "@wowditi"
owners:
  - "@kmova"
creation-date: 2022-03-31
last-updated: 2022-03-31
status: Request for Comment
---

# Snapshot Mount Support for LVM-LocalPV

## Table of Contents

* [Table of Contents](#table-of-contents)
* [Summary](#summary)
* [Motivation](#motivation)
  * [Goals](#goals)
  * [Non-Goals](#non-goals)
* [Proposal](#proposal)
* [Implementation Details](#implementation-details)
* [Test Plan](#test-plan)
* [Graduation Criteria](#graduation-criteria)
* [Drawbacks](#drawbacks)

## Summary

LVM snapshots are space-efficient, quick point-in-time copies of LVM volumes. A snapshot consumes space only when changes are made to its source logical volume. For testing purposes it can be beneficial to start a new application, for example a database, using such a snapshot as the backing storage volume. This allows for creating an exact replica of the application at the moment the snapshot was taken (assuming its state is entirely dependent on the filesystem), which can be used to debug issues that occurred in production, or to test how a new version of the application (or an external application) would affect the data.

## Motivation

### Goals

- Users should be able to mount snapshots
- Users should be able to mount both thick and thin snapshots
- Users should be able to mount snapshots as read-only or copy-on-write

### Non-Goals

- Creating clones from snapshots
- Restoring a snapshot

## Proposal

To mount an LVM-LocalPV snapshot, we need to create a PersistentVolumeClaim that references the snapshot as its `dataSource`:

```
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: csi-lvmpvc-snap
spec:
  storageClassName: openebs-lvmpv
  dataSource:
    name: lvmpv-snap
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```
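
For context, the `lvmpv-snap` referenced above would have been created beforehand from a VolumeSnapshotClass and a VolumeSnapshot. A minimal sketch follows; the class name (`lvmpv-snapclass`), the source PVC name (`csi-lvmpvc`), and the `snapSize` value are illustrative assumptions, not taken from this proposal:

```
kind: VolumeSnapshotClass
apiVersion: snapshot.storage.k8s.io/v1
metadata:
  name: lvmpv-snapclass
driver: local.csi.openebs.io
deletionPolicy: Delete
parameters:
  snapSize: "1Gi"
---
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: lvmpv-snap
spec:
  volumeSnapshotClassName: lvmpv-snapclass
  source:
    persistentVolumeClaimName: csi-lvmpvc
```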

Defining `resources` is required but largely redundant, since the size of the snapshot is already defined in the snapshot class.
However, we could extend the logical volume if the requested storage size is larger than that of the snapshot (up to a maximum of the size of the original volume).

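The resize rule just described can be sketched as a small helper. This is a hypothetical illustration, not existing lvm-localpv code; the function name and its placement are assumptions:

```go
package main

import "fmt"

// clampSnapshotSize applies the proposed rule: a thick snapshot may be grown
// to the requested PVC size, but never beyond the size of the origin volume,
// and never below the size the snapshot was created with.
func clampSnapshotSize(requested, snapSize, originSize int64) int64 {
	if requested < snapSize {
		return snapSize
	}
	if requested > originSize {
		return originSize
	}
	return requested
}

func main() {
	const gib = int64(1) << 30
	// A 10Gi request against a 1Gi snapshot of a 5Gi origin is capped at 5Gi.
	fmt.Println(clampSnapshotSize(10*gib, 1*gib, 5*gib) / gib) // prints 5
}
```
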
```
$ kubectl get pvc csi-lvmpvc-snap -o yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"csi-lvmpvc-snap","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"dataSource":{"apiGroup":"snapshot.storage.k8s.io","kind":"VolumeSnapshot","name":"lvmpv-snap"},"resources":{"requests":{"storage":"10Gi"}},"storageClassName":"openebs-lvmpv"}}
    pv.kubernetes.io/bind-completed: "yes"
    pv.kubernetes.io/bound-by-controller: "yes"
    volume.beta.kubernetes.io/storage-provisioner: local.csi.openebs.io
    volume.kubernetes.io/storage-provisioner: local.csi.openebs.io
  creationTimestamp: "2022-03-31T10:37:06Z"
  finalizers:
  - kubernetes.io/pvc-protection
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          f:pv.kubernetes.io/bind-completed: {}
          f:pv.kubernetes.io/bound-by-controller: {}
          f:volume.beta.kubernetes.io/storage-provisioner: {}
          f:volume.kubernetes.io/storage-provisioner: {}
      f:spec:
        f:volumeName: {}
    manager: kube-controller-manager
    operation: Update
    time: "2022-03-31T10:37:06Z"
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:status:
        f:accessModes: {}
        f:capacity:
          .: {}
          f:storage: {}
        f:phase: {}
    manager: kube-controller-manager
    operation: Update
    subresource: status
    time: "2022-03-31T10:37:06Z"
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .: {}
          f:kubectl.kubernetes.io/last-applied-configuration: {}
      f:spec:
        f:accessModes: {}
        f:dataSource: {}
        f:resources:
          f:requests:
            .: {}
            f:storage: {}
        f:storageClassName: {}
        f:volumeMode: {}
    manager: kubectl
    operation: Update
    time: "2022-03-31T10:37:06Z"
  name: csi-lvmpvc-snap
  namespace: default
  resourceVersion: "10994"
  uid: 0217ba5b-0ba2-4e28-af2a-acb97c55e8b1
spec:
  accessModes:
  - ReadWriteOnce
  dataSource:
    apiGroup: snapshot.storage.k8s.io
    kind: VolumeSnapshot
    name: lvmpv-snap
  resources:
    requests:
      storage: 10Gi
  storageClassName: openebs-lvmpv
  volumeMode: Filesystem
  volumeName: pvc-0217ba5b-0ba2-4e28-af2a-acb97c55e8b1
status:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 10Gi
  phase: Bound
```

## Implementation Details

Only existing code needs to be changed to implement this; there is no need for an entirely new control flow.
The changes below cover the primary modules that need to be modified; however, some additional utility modules will also need changes to support them. Most likely the following files will need additions to support snapshots: [kubernetes.go](../../pkg/builder/volbuilder/kubernetes.go) (to implement a `Kubeclient.GetSnapshot` function), [lvm_util.go](../../pkg/lvm/lvm_util.go) (to add a function that changes the snapshot's write access) and [mount.go](../../pkg/lvm/mount.go) (to support mounting/unmounting of snapshots).

- In [controller.go](../../pkg/driver/controller.go), the code path in the `CreateVolume` function for the `controller` type that is taken when `contentSource.GetSnapshot()` is not `nil` needs to be implemented. When this path is triggered it should return the correct `volName`, `topology` and `cntx`.
- In [agent.go](../../pkg/driver/agent.go), the `NodePublishVolume` function for the `node` type needs to be changed so that it checks whether the `volName` refers to a snapshot and, if so, mounts the snapshot at the specified location.
  - This also requires changing the snapshot's write access to `rw` using the `lvchange` command when the PersistentVolumeClaim specifies a ReadWrite access mode. Note that we do not need to change it back to read-only, since we can restrict future mounts to read-only via `MountOptions`.
  - Alternatively, we could make snapshots writable by default.
- In [agent.go](../../pkg/driver/agent.go), the `NodeUnpublishVolume` function for the `node` type needs to be changed so that it unmounts the snapshot.

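The `lvchange` step above could look roughly like the following sketch. This is a hypothetical helper, not existing code; the real implementation would live in `lvm_util.go`, and the volume-group and snapshot names are illustrative:

```go
package main

import (
	"fmt"
	"strings"
)

// lvChangePermissionArgs builds the argument list for flipping a snapshot
// logical volume between read-only and read-write before it is mounted.
func lvChangePermissionArgs(vg, lv string, readwrite bool) []string {
	perm := "r"
	if readwrite {
		perm = "rw"
	}
	return []string{"--permission", perm, fmt.Sprintf("%s/%s", vg, lv)}
}

func main() {
	// On the node, the agent would execute this as an lvchange invocation.
	args := lvChangePermissionArgs("lvmvg", "snap-lvmpv", true)
	fmt.Println("lvchange " + strings.Join(args, " "))
	// prints: lvchange --permission rw lvmvg/snap-lvmpv
}
```
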

## Test Plan

- Create a PersistentVolumeClaim for a snapshot and verify that it is created successfully and can be bound
- Create a PersistentVolumeClaim for a thick snapshot with a storage size larger than the snapSize but smaller than the size of the original volume, and verify that the storage size has increased
- Create a PersistentVolumeClaim for a thick snapshot with a storage size larger than the size of the original volume, and verify that the size has been set to exactly the size of the original volume
- Create a PersistentVolumeClaim for a thin snapshot with a storage size larger than that of the snapshot, and verify that it has not been resized
- Mount a PersistentVolumeClaim of a VolumeSnapshot in a pod and verify that it is mounted successfully
- Delete the VolumeSnapshot and verify that all PersistentVolumeClaims that reference it are deleted
- Delete the VolumeSnapshot and verify that the snapshot is deleted once all the pods that have mounted a PersistentVolumeClaim with the VolumeSnapshot as its dataSource have been deleted
- Mount the PersistentVolumeClaim in a pod as Read and validate that the filesystem is read-only
- Mount the PersistentVolumeClaim in a pod as ReadWrite and validate that the filesystem is writable
- Verify that the original volume keeps working after the snapshot is mounted, and that changes to the snapshot are not propagated to the original volume and vice versa
- Verify that creating a PersistentVolumeClaim for a non-existent VolumeSnapshot fails

## Graduation Criteria

All test cases mentioned in the [Test Plan](#test-plan) section need to be automated.

## Drawbacks

- As far as I understand it, creating a PersistentVolumeClaim with a snapshot as its `dataSource` would normally clone the snapshot into an actual volume, so we would be diverging from that behaviour.
  - We could use an annotation to mark the behaviour implemented here, so that the clone behaviour can still be added at a later time.