
[DOC] - Consider adding a k8s job in backup/restore docs #462

Open
@Adam-D-Lewis

Description


Preliminary Checks

Summary

Consider adding a k8s Job to the file system backup docs. A k8s Job is preferable to a simple pod when the file system is large and copying all the data takes a long time. If you try to tar everything up from JupyterLab, the server can time out due to inactivity before everything has been copied into the tarball. A k8s Job avoids this, e.g. something like:

kind: Job
apiVersion: batch/v1
metadata:
  name: backup
  namespace: dev
spec:
  template:
    spec:
      volumes:
        - name: backup-volume
          persistentVolumeClaim:
            claimName: "jupyterhub-dev-share"
      containers:
        - name: debugger
          image: ubuntu
          command: ["/bin/bash", "-c", "cd /data && tar -cvpzf 2024-03-08-shared.tar.gz shared && echo 'Backup complete' > backup.txt"]
          volumeMounts:
            - mountPath: "/data"
              name: backup-volume
      restartPolicy: OnFailure
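
To run and monitor it, something like the following should work — a minimal sketch, assuming the manifest above is saved as backup-job.yaml and uses the dev namespace from the example (the 6h timeout is an arbitrary placeholder; adjust for the size of the file system):

kubectl apply -f backup-job.yaml
# wait for the backup Job to finish before relying on the tarball
kubectl wait --for=condition=complete job/backup -n dev --timeout=6h
# check the tar output / 'Backup complete' message
kubectl logs job/backup -n dev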

And for the restore:

kind: Job
apiVersion: batch/v1
metadata:
  name: restore
  namespace: dev
spec:
  template:
    spec:
      volumes:
        - name: backup-volume
          persistentVolumeClaim:
            claimName: "jupyterhub-dev-share"
      containers:
        - name: debugger
          image: ubuntu
          command: ["/bin/bash", "-c", "cd /data && tar -xvpzf 2024-03-08-shared.tar.gz --skip-old-files && echo 'Restore complete' > restore2.txt"]
          volumeMounts:
            - mountPath: "/data"
              name: backup-volume
      restartPolicy: OnFailure
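
And similarly for running the restore — a sketch assuming the manifest above is saved as restore-job.yaml; once the restored data has been verified, the finished Jobs can be cleaned up:

kubectl apply -f restore-job.yaml
kubectl wait --for=condition=complete job/restore -n dev --timeout=6h
kubectl logs job/restore -n dev
# remove the completed Jobs after verifying the restored data
kubectl delete job backup restore -n dev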

Steps to Resolve this Issue
