---
id: kastenbrfs
title: Kasten Backup and Restore using Replicated PV Mayastor Snapshots - FileSystem
keywords:
 - Kasten Backup and Restore using Replicated PV Mayastor Snapshots - FileSystem
 - Kasten Backup and Restore
 - FileSystem
 - OpenEBS
description: In this document, you learn about Kasten Backup and Restore using Replicated PV Mayastor Snapshots - FileSystem.
---

# Kasten Backup and Restore using Replicated PV Mayastor Snapshots - FileSystem

Using Kasten K10 for backup and restore operations with Replicated PV Mayastor snapshots combines the strengths of both tools, providing a robust, high-performance solution for protecting your Kubernetes applications and data. This integration protects your stateful applications and gives you the flexibility to recover quickly from failures or to migrate data as needed.

In this guide, we will use Kasten to create a backup of a sample Nginx application that uses a Replicated PV Mayastor volume, transfer the backup to an object store, and restore it on a different cluster.

## Requirements

### Replicated PV Mayastor

Replicated PV Mayastor is a high-performance, container-native storage solution that provides persistent storage for Kubernetes applications. It supports various storage types, including local disks, NVMe, and more. It integrates with Kubernetes and creates fast, efficient snapshots at the storage layer, providing a point-in-time copy of the data. These snapshots consume minimal space, as they only capture changes since the last snapshot.

Make sure that Replicated PV Mayastor has been installed, pools have been configured, and applications have been deployed before proceeding to the next step. Refer to the [OpenEBS Installation Documentation](../../quickstart-guide/installation.md#installation-via-helm) for more details.

### Kasten K10

Kasten K10 is a Kubernetes-native data management platform that offers backup, disaster recovery, and application mobility for Kubernetes applications. It automates and orchestrates backup and restore operations, making it easier to protect Kubernetes applications and data. Kasten K10 integrates with Replicated PV Mayastor to orchestrate the snapshot creation process. This ensures that snapshots are consistent with the application's state, making them ideal for backup and restore operations. Refer to the [Kasten Documentation](https://docs.kasten.io/latest/index.html) for more details.

## Details of Setup

### Install Kasten

Install Kasten (V7.0.5) using `helm`. Refer to the [Kasten Documentation](https://docs.kasten.io/7.0.5/install/requirements.html#prerequisites) to view the prerequisites and pre-flight checks.

As an example, we will use the `openebs-hostpath` storageclass as the global persistence storageclass for the Kasten installation.

1. Install Kasten.

```
helm install k10 kasten/k10 --namespace=kasten-io --set global.persistence.storageClass=openebs-hostpath
```
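The command above assumes the Kasten Helm chart repository has already been added and the `kasten-io` namespace exists. If not, a minimal setup sketch (the repository URL is Kasten's official chart location):

```
# Add the Kasten Helm repository and refresh the local chart index.
helm repo add kasten https://charts.kasten.io/
helm repo update

# Create the target namespace if it does not already exist.
kubectl create namespace kasten-io
```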

2. Verify that Kasten has been installed correctly.

**Command**

```
kubectl get po -n kasten-io
```

**Output**

```
NAME                                    READY   STATUS    RESTARTS   AGE
aggregatedapis-svc-6cff958895-4kq8j     1/1     Running   0          4m11s
auth-svc-7f48c794f-jmw4k                1/1     Running   0          4m10s
catalog-svc-55798f8dc-m7nqm             2/2     Running   0          4m11s
controllermanager-svc-85679687f-7c5t2   1/1     Running   0          4m11s
crypto-svc-7f9bbbbccd-g4vf7             4/4     Running   0          4m11s
dashboardbff-svc-77d8b59b4d-lxtrr       2/2     Running   0          4m11s
executor-svc-598dd65578-cmvnt           1/1     Running   0          4m11s
executor-svc-598dd65578-d8jnx           1/1     Running   0          4m11s
executor-svc-598dd65578-p52h5           1/1     Running   0          4m11s
frontend-svc-7c97bf4c7d-xmtb7           1/1     Running   0          4m11s
gateway-68cdc7846-x9vw4                 1/1     Running   0          4m11s
jobs-svc-7489b594c4-zkx4n               1/1     Running   0          4m11s
k10-grafana-5fdccfbc5c-jtlms            1/1     Running   0          4m11s
kanister-svc-9f47747f5-kt4r6            1/1     Running   0          4m10s
logging-svc-6846c585d8-hqkt2            1/1     Running   0          4m11s
metering-svc-64d847f4c7-chk9s           1/1     Running   0          4m11s
prometheus-server-cbd4d5d8c-vql87       2/2     Running   0          4m11s
state-svc-84c9bcc968-n2hg4              3/3     Running   0          4m11s
```


### Create VolumeSnapshotClass

Whenever Kasten identifies volumes that were provisioned via a CSI driver, it searches for a VolumeSnapshotClass with the Kasten annotation for that CSI driver and uses it to create snapshots.

Create a `VolumeSnapshotClass` with the following yaml:

```
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  annotations:
    k10.kasten.io/is-snapshot-class: "true"
  name: csi-mayastor-snapshotclass
deletionPolicy: Delete
driver: io.openebs.csi-mayastor
```
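A quick way to apply and sanity-check the class (the filename is an example; save the yaml above under any name):

```
# Apply the VolumeSnapshotClass defined above.
kubectl apply -f csi-mayastor-snapshotclass.yaml

# Confirm the class exists and carries the Kasten annotation
# (dots in the annotation key are escaped for JSONPath).
kubectl get volumesnapshotclass csi-mayastor-snapshotclass \
  -o jsonpath='{.metadata.annotations.k10\.kasten\.io/is-snapshot-class}'
```

The second command should print `true` if the annotation was applied correctly.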

### Validate Dashboard Access

Use the following `kubectl` command to forward a local port to the Kasten ingress port, or change the `gateway` service type from **ClusterIP** to **NodePort** to establish a connection.

:::note
By default, the Kasten dashboard is not exposed externally.
:::

- Forward a local port to the Kasten ingress port.

```
kubectl --namespace kasten-io port-forward service/gateway 8080:80
```

or

- Change the service type to **NodePort**.

In this example, we have changed the service type to **NodePort**.

**Command**

```
kubectl patch svc gateway -n kasten-io -p '{"spec": {"type": "NodePort"}}'
```

**Output**

```
service/gateway patched
```


**Command**

```
kubectl get svc -n kasten-io
```

**Output**

```
NAME                    TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                               AGE
aggregatedapis-svc      ClusterIP   10.98.172.242    <none>        443/TCP                               4m57s
auth-svc                ClusterIP   10.100.81.56     <none>        8000/TCP                              4m57s
catalog-svc             ClusterIP   10.104.93.115    <none>        8000/TCP                              4m57s
controllermanager-svc   ClusterIP   10.98.166.83     <none>        8000/TCP,18000/TCP                    4m57s
crypto-svc              ClusterIP   10.104.77.142    <none>        8000/TCP,8003/TCP,8001/TCP,8002/TCP   4m57s
dashboardbff-svc        ClusterIP   10.105.136.20    <none>        8000/TCP,8001/TCP                     4m57s
executor-svc            ClusterIP   10.100.151.70    <none>        8000/TCP                              4m57s
frontend-svc            ClusterIP   10.100.151.96    <none>        8000/TCP                              4m57s
gateway                 NodePort    10.111.123.227   <none>        80:30948/TCP                          4m57s
gateway-admin           ClusterIP   10.105.197.120   <none>        8877/TCP                              4m57s
jobs-svc                ClusterIP   10.105.46.23     <none>        8000/TCP                              4m57s
k10-grafana             ClusterIP   10.106.140.98    <none>        80/TCP                                4m57s
kanister-svc            ClusterIP   10.108.254.197   <none>        8000/TCP                              4m57s
logging-svc             ClusterIP   10.100.192.252   <none>        8000/TCP,24224/TCP,24225/TCP          4m57s
metering-svc            ClusterIP   10.111.201.5     <none>        8000/TCP                              4m57s
prometheus-server       ClusterIP   10.99.223.19     <none>        80/TCP                                4m57s
prometheus-server-exp   ClusterIP   10.109.89.22     <none>        80/TCP                                4m57s
state-svc               ClusterIP   10.99.183.141    <none>        8000/TCP,8001/TCP,8002/TCP            4m57s
```
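With the NodePort assigned above (30948 here; yours will differ), the dashboard is reachable from a browser at `http://<node-ip>:<node-port>/k10/#/`. A quick reachability check from the command line, discovering the node IP and port dynamically (a sketch; it assumes the first node's InternalIP is routable from where you run it):

```
# Look up a node address and the gateway service's NodePort.
NODE_IP=$(kubectl get nodes -o jsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")].address}')
NODE_PORT=$(kubectl get svc gateway -n kasten-io -o jsonpath='{.spec.ports[0].nodePort}')

# Print the HTTP status code returned by the dashboard endpoint.
curl -s -o /dev/null -w "%{http_code}\n" "http://${NODE_IP}:${NODE_PORT}/k10/"
```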

The dashboard is now accessible as shown below:


### Add S3 Location Profile

Location profiles help with managing backups and moving applications and their data. They allow you to create backups, transfer them between clusters or clouds, and import them into a new cluster. Click **Add New** on the profiles page to create a location profile.



As an example, we will use a Google Cloud bucket from Google Cloud Platform (GCP). The `GCP Project ID` and `GCP Service Key` fields are mandatory; the `GCP Service Key` field takes the complete content of the JSON key file created for the service account. Refer to the [Kasten Documentation](https://docs.kasten.io/latest/install/storage.html) for more information on profiles for various cloud environments.

:::important
Make sure the service account has the necessary permissions.
:::

## Application Snapshot - Backup and Restore

### From Source Cluster

In this example, we have deployed a sample Nginx test application with a Replicated PV Mayastor PVC whose volume mode is `Filesystem`.

**Application yaml**

```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test
  namespace: test
spec:
  replicas: 1 # You can increase this number if you want more replicas.
  selector:
    matchLabels:
      app: test
  template:
    metadata:
      labels:
        app: test
    spec:
      nodeSelector:
        kubernetes.io/os: linux
      containers:
        - image: nginx
          name: nginx
          command: [ "sleep", "1000000" ]
          volumeMounts:
            - name: claim
              mountPath: "/volume"
      volumes:
        - name: claim
          persistentVolumeClaim:
            claimName: mayastor-pvc
```

**PVC yaml**

```
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mayastor-pvc
  namespace: test
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: mayastor-thin-multi-replica
  volumeMode: Filesystem
```
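The two manifests above can be applied as follows (the filenames are examples; save the yaml under any names you like):

```
# Create the namespace used by the sample application.
kubectl create namespace test

# Apply the PVC first, then the Deployment that mounts it.
kubectl apply -f mayastor-pvc.yaml
kubectl apply -f test-deployment.yaml
```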

**Command**

```
kubectl get po -n test
```

**Output**

```
NAME                   READY   STATUS    RESTARTS   AGE
test-cd9847c9c-wc6th   1/1     Running   0          25s
```

**Command**

```
kubectl get pvc -n test
```

**Output**

```
NAME           STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS                  AGE
mayastor-pvc   Bound    pvc-b5baa4cf-b126-42e6-b11f-3e20aeb3ab7b   1Gi        RWO            mayastor-thin-multi-replica   54s
```


**Sample Data**

```
kubectl exec -it test-cd9847c9c-wc6th -n test -- bash
root@test-cd9847c9c-wc6th:/# cd volume/
root@test-cd9847c9c-wc6th:/volume# cat abc
Mayastor Kasten backup and restore
root@test-cd9847c9c-wc6th:/volume# exit
```
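The sample file read above was written to the volume before the backup. Assuming the same pod name and mount path as in this example, it can also be created non-interactively:

```
# Write the sample marker file into the Mayastor-backed volume
# (the pod name below is from this example; yours will differ).
kubectl exec -n test test-cd9847c9c-wc6th -- \
  sh -c 'echo "Mayastor Kasten backup and restore" > /volume/abc'
```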

#### Applications from Kasten Dashboard

By default, the Kasten platform equates namespaces to applications. Since we have already installed the applications, clicking the Applications card on the dashboard will take us to the following view:


#### Create Policies from Kasten Dashboard

Policies are implemented in Kasten to streamline and automate data management workflows. A policy combines the actions you wish to execute (for example, snapshots), a frequency or schedule for those actions, and selection criteria for the resources you want to manage. A section for managing policies is located next to the Applications card on the main dashboard.

:::important
A policy can be created either from this page or from the application page. Enable backups via snapshot exports to export applications to the object store.
:::

- Select the application. In this example, it is “test”.



- Click **Create Policy** once all necessary configurations are done.



- Click **Show import details** to get the import key. Without the import key, you cannot import the backup on the target cluster.



- When you create an import policy on the receiving cluster, copy and paste the encoded text that is displayed in the **Importing Data** dialog box.


#### Backup

Once the policies have been created, you can run the backup. In this scenario, the policies were created to run on demand; snapshots can also be scheduled using the available options (for example, hourly or weekly).

- Select **Run Once > Yes, Continue** to trigger the snapshot. Backup operations convert application and volume snapshots into backups by transforming them into an infrastructure-independent format and then storing them in a target location (a Google Cloud bucket in this example).



You can monitor the status of the snapshots and exports from the dashboard. The backup has completed successfully and been exported to the storage location.

### From Target Cluster

Make sure that Replicated PV Mayastor has been installed, pools have been configured, and storageclasses have been created (the same as on the backup cluster) before restoring to the target cluster. Refer to the [OpenEBS Installation Documentation](../../quickstart-guide/installation.md#installation-via-helm) for more details.

Make sure that Kasten has been installed, the volumesnapshotclass has been created, and the dashboard is accessible before restoring to the target cluster. Refer to the [Install Kasten section](#install-kasten) for more details.

:::note
The location profile must point to the exact same location as the backup; otherwise, the restore will fail.
:::

We have completed the backup process and applied the above configuration on the restore cluster, so the dashboard is now available on the target cluster.


#### Create Import Policy

Create an import policy to restore the application from the backup. Click **Create New Policy** and enable **Restore After Import** to restore the applications once they are imported. If this is not enabled, you have to manually restore the applications from the import metadata available on the dashboard.



The policy is created with an on-demand import frequency. In the import section of the configuration, paste the text that was displayed when the restore point was exported from the source cluster.



Click **Create New Policy** once all necessary configurations are done.


#### Restore

Once the policies are created, the import and restore processes can be initiated by selecting **Run Once**.



The restore has completed successfully.


### Verify Application and Data

Use the following commands to verify the application and the data.

**Command**

```
kubectl get pvc -n test
```

**Output**

```
NAME           STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS                  AGE
mayastor-pvc   Bound    pvc-dee0596a-5382-4eae-9cc2-82e79403df58   1Gi        RWO            mayastor-thin-multi-replica   33s
```

**Command**

```
kubectl get po -n test
```

**Output**

```
NAME                   READY   STATUS    RESTARTS   AGE
test-cd9847c9c-s922r   1/1     Running   0          38s
```

**Sample Data**

```
root@master-restore:~# kubectl exec -it test-cd9847c9c-s922r -n test -- bash
root@test-cd9847c9c-s922r:/# cd volume/
root@test-cd9847c9c-s922r:/volume# cat abc
Mayastor Kasten backup and restore
root@test-cd9847c9c-s922r:/volume# exit
exit
root@master-restore:~#
```

The PVC/Deployment has been restored as expected.
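The interactive session above can also be condensed into a single non-interactive check (resource names are from this guide's example):

```
# Read the sample file back through the restored Deployment;
# kubectl picks a running pod behind deploy/test automatically.
kubectl exec -n test deploy/test -- cat /volume/abc
```

If the restore succeeded, this prints the original marker line, `Mayastor Kasten backup and restore`.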

## See Also

- [Velero Backup and Restore using Replicated PV Mayastor Snapshots - Raw Block Volumes](velero-br-rbv.md)
- [Replicated PV Mayastor Installation on MicroK8s](../openebs-on-kubernetes-platforms/microkubernetes.md)
- [Replicated PV Mayastor Installation on Talos](../openebs-on-kubernetes-platforms/talos.md)
- [Replicated PV Mayastor Installation on Google Kubernetes Engine](../openebs-on-kubernetes-platforms/gke.md)
- [Provisioning NFS PVCs](../read-write-many/nfspvc.md)