-
I don't think this is currently possible, but I'm also not sure I really understand your setup.
How would you imagine the configuration for such a feature working? I'm currently having a hard time coming up with an intuitive way of splitting the content other than resorting to multiple instances, each of them responsible for a part of the data.
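For reference, a minimal sketch of what the multi-instance approach could look like (service names, volume names and the `/backup` sub-paths are made up for illustration):

```yml
# one docker-volume-backup service per group of data, each producing its own archive
services:
  backup-app1:
    image: offen/docker-volume-backup:latest
    environment:
      BACKUP_FILENAME: app1-%Y-%m-%dT%H-%M-%S.tar.gz
    volumes:
      - app1_data:/backup/app1:ro
      - /var/run/docker.sock:/var/run/docker.sock:ro

  backup-app2:
    image: offen/docker-volume-backup:latest
    environment:
      BACKUP_FILENAME: app2-%Y-%m-%dT%H-%M-%S.tar.gz
    volumes:
      - app2_data:/backup/app2:ro
      - /var/run/docker.sock:/var/run/docker.sock:ro

volumes:
  app1_data:
    external: true
  app2_data:
    external: true
```

Each instance produces its own archive and can run on its own schedule, at the cost of one extra container per group of data.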
-
Thanks for the reply. Basically, I've assigned scripts that run before/after Kopia snapshots, which run on their own schedule. In the pre script, I manually run the "backup" command inside the docker-volume-backup container. That spits out the volume backups as a single .tar into my local Kopia snapshot directory. Kopia then runs, uploads it to remote storage, and finally runs the after script, which deletes the local copy. I still find DVB useful for this case because it does a few things that would take time to wire up manually, like stopping labeled containers first (and it is compatible with Swarm). In the future I may end up using other features as well; it's a handy tool.
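In case it's useful to anyone else, the wiring looks roughly like this. The container name, paths and Kopia action flags below are a sketch from memory rather than copy-paste material, so double-check them against the Kopia docs:

```sh
# pre-snapshot.sh: trigger a manual DVB run so the archive lands in the local Kopia source dir
docker exec docker-volume-backup backup

# post-snapshot.sh: remove the local archive once Kopia has snapshotted and uploaded it
rm -f /srv/kopia/snapshots/backup-*.tar.gz

# hook both scripts into Kopia as snapshot actions for the source directory
# (actions have to be enabled on the repository for this to take effect)
kopia policy set /srv/kopia/snapshots \
  --before-snapshot-root-action /usr/local/bin/pre-snapshot.sh \
  --after-snapshot-root-action /usr/local/bin/post-snapshot.sh
```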
The simplest thing that comes to mind would be to add an env var flag that, when true, saves separate archives per subdirectory of /backup into /archive. Another feature request seems related to this as well, as they are also using a multi-compose-file workflow. In my case, if I ran a DVB container for each, I would have about 15 extra backup containers.
-
After some more testing, it seems that the containers with the docker-volume-backup.stop-during-backup=true label that live outside of the compose file docker-volume-backup is in aren't actually stopping during the backups. I didn't see that covered in the docs, but I saw a couple of others in the issues who seemed to be doing the same as me (DVB in one compose file, with an external network linking services in other compose files), so I assumed it could work. If that's not possible, I'll need another solution for this setup. Then again, maybe it's a Swarm issue. I did try both methods on the Docker Swarm page and neither worked. Or at least, running docker ps in a separate terminal window during the backup showed them all still running...
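For context, one of the variants I tried in the other compose files looks roughly like this (image, service and network names are placeholders; the label is copied as documented):

```yml
# other-stack/docker-compose.yml
services:
  db:
    image: postgres:16
    labels:
      - docker-volume-backup.stop-during-backup=true
    networks:
      - backup-net
    volumes:
      - db_data:/var/lib/postgresql/data

networks:
  backup-net:
    external: true

volumes:
  db_data:
```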
-
On the feature request: if you don't see it as a common use case, I can implement it manually in my scripts. But even though that's possible, it could be a handy built-in feature for performance reasons (handling the archiving process only once).
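For the record, the manual version I'd add to the pre script would be something along these lines (paths and naming are placeholders):

```sh
# archive each mounted volume separately instead of relying on DVB's single tar
SRC=/backup                  # where the volume subfolders are mounted
DEST=/srv/kopia/snapshots    # local directory Kopia snapshots
STAMP=$(date +%Y-%m-%dT%H-%M-%S)

for dir in "$SRC"/*/; do
  name=$(basename "$dir")
  tar -czf "$DEST/${name}-${STAMP}.tar.gz" -C "$SRC" "$name"
done
```

It works, but it sits outside DVB's own archiving (and would tar everything a second time if DVB still produces its combined archive), which is why doing it once inside DVB seems preferable.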
-
Hi, thanks for your work. I think this is a feature request unless I haven't figured out how to accomplish what I'm trying to do.
Currently, I have a single docker-volume-backup container backing up volumes from many other docker compose files via a shared network. Right now, each volume mounts to its own subfolder under /backup. Since I'm using Kopia to manage database and other backups, I've set the cron on DVB to never fire and just use Kopia pre/post scripts to start and clean up the DVB backup. This is working pretty well, except that I end up with one very large archive of all the backed-up volumes. That's somewhat of a problem because it goes to Backblaze B2, where there is a maximum daily download size (for the free account)... which means that at a certain size, at least I assume, it may become impossible to download.
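Trimmed down to two volumes, the setup looks roughly like this (names and paths are placeholders, the real file mounts many more volumes, and I approximate "never" with a cron expression that can't fire):

```yml
services:
  backup:
    image: offen/docker-volume-backup:latest
    environment:
      # a cron expression that never fires (Feb 31st); backups are
      # triggered manually from the Kopia pre script instead
      BACKUP_CRON_EXPRESSION: "0 0 31 2 *"
    volumes:
      - app1_data:/backup/app1:ro
      - app2_data:/backup/app2:ro
      - /srv/kopia/snapshots:/archive
      - /var/run/docker.sock:/var/run/docker.sock:ro
    networks:
      - backup-net

networks:
  backup-net:
    external: true

volumes:
  app1_data:
    external: true
  app2_data:
    external: true
```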
Question: is there currently a way to output multiple backup archives within the same DVB instance? If not, I think it would be a great addition :)