### Preliminary Checks

- This issue is not a question, feature request, RFC, or anything other than a bug report. Please post those things in GitHub Discussions: https://github.com/nebari-dev/nebari/discussions
### Summary

Consider adding a Kubernetes Job for file-system backups. A Job is preferable to a simple pod when the file system is large and copying all the data takes a long time: if you try to tar everything up from JupyterLab, your server can time out due to inactivity before everything has been copied into the tarball. A Kubernetes Job gets around this, e.g. something like:
```yaml
kind: Job
apiVersion: batch/v1
metadata:
  name: backup
  namespace: dev
spec:
  template:
    spec:
      volumes:
        - name: backup-volume
          persistentVolumeClaim:
            claimName: "jupyterhub-dev-share"
      containers:
        - name: debugger
          image: ubuntu
          command: ["/bin/bash", "-c", "cd /data && tar -cvpzf 2024-03-08-shared.tar.gz shared && echo 'Backup complete' > backup.txt"]
          volumeMounts:
            - mountPath: "/data"
              name: backup-volume
      restartPolicy: OnFailure
```
and for restore:

```yaml
kind: Job
apiVersion: batch/v1
metadata:
  name: restore
  namespace: dev
spec:
  template:
    spec:
      volumes:
        - name: backup-volume
          persistentVolumeClaim:
            claimName: "jupyterhub-dev-share"
      containers:
        - name: debugger
          image: ubuntu
          command: ["/bin/bash", "-c", "cd /data && tar -xvpzf 2024-03-08-shared.tar.gz --skip-old-files && echo 'Restore complete' > restore2.txt"]
          volumeMounts:
            - mountPath: "/data"
              name: backup-volume
      restartPolicy: OnFailure
```
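The tar behavior the two jobs rely on can be sketched locally (no cluster needed); the directory and archive names below are illustrative, not taken from a real deployment:

```shell
set -e
workdir=$(mktemp -d)
mkdir -p "$workdir/shared"
echo "notebook" > "$workdir/shared/nb.txt"

# Backup: same flags as the backup job
# (-c create, -p preserve permissions, -z gzip, -f archive file)
(cd "$workdir" && tar -cpzf backup-shared.tar.gz shared)

# Simulate restoring into a volume where a file changed after the backup:
# --skip-old-files leaves existing files untouched instead of overwriting them.
echo "edited after backup" > "$workdir/shared/nb.txt"
(cd "$workdir" && tar -xpzf backup-shared.tar.gz --skip-old-files)
cat "$workdir/shared/nb.txt"   # still "edited after backup"
```

In a real deployment you would submit the manifests with `kubectl apply -f backup-job.yaml` and wait for completion with `kubectl wait --for=condition=complete job/backup -n dev` rather than watching a notebook session stay alive.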
### Steps to Resolve this Issue