Hello Self-Hosters,
What is the best practice for backing up Docker data as a self-hoster who wants ease of maintenance and foolproof backups? (pick only one :D )
Assume directories with user data are mapped to a NAS share via NFS and backups are handled separately.
My bigger concern is how you handle all the other stuff stored locally on the server, like caches, databases, etc. The backup target will eventually be the NAS, and from there everything gets backed up a second time to external drives.
- Is it better to run `cp /var/lib/docker/volumes/* /backupLocation` every once in a while, or is it preferable to define mount points for everything inside `/home/user/Containers` and then use a script to sync it to wherever you keep backups? What pros and cons have you seen or experienced with these approaches? (A sketch of the script approach follows this list.)
- How do you test your backups? I'm thinking about digging up an old PC to test them on. I assume I can just edit the IP addresses in the Compose file, mount my NFS directories, and fail over to see if it runs.
- I started documenting my system in my notes and making a checklist of what I need to back up and where it's stored. I'm currently trying to figure out whether I want to move some directories for consistency. Can I just run `docker-compose down`, edit the mount points in `docker-compose.yml`, and run `docker-compose up` to get a working system? (See the migration sketch below.)
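For reference, the sync-script approach I have in mind is roughly this (all paths are placeholders; I'd stop the stack first so database files aren't copied mid-write):

```bash
#!/usr/bin/env bash
# backup-containers.sh -- rough sketch; all paths are placeholders.
set -euo pipefail

SRC="/home/user/Containers"         # bind-mount root
DEST="/mnt/nas/backups/containers"  # NFS-mounted NAS share

cd "$SRC"

# Stop the stack so databases and caches are quiescent on disk.
docker-compose stop

# -a preserves ownership, permissions, and timestamps; --delete mirrors removals.
rsync -a --delete "$SRC/" "$DEST/"

docker-compose start
```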
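And for the directory move, this is the workflow I'm imagining (paths are placeholders):

```bash
# Move a service's data to a new bind-mount location.
docker-compose down

# Relocate the data while the stack is down, preserving ownership/permissions.
sudo mv /home/user/old-location/myapp /home/user/Containers/myapp

# Point the service at the new path in docker-compose.yml:
#   volumes:
#     - /home/user/Containers/myapp:/data

docker-compose up -d
```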
Bind mounts. I've never bothered to figure out named volumes, since I often work with the contents outside Docker. Then I just back up the whole Proxmox VM. (Yes, I'm aware Proxmox supports containers; no, I don't plan to convert. That's more time and effort for no meaningful gain to me.)
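For anyone unfamiliar with the distinction: a named volume lives under Docker's own tree, while a bind mount is just a host path you pick, which is what makes working with the contents outside Docker easy. A quick illustration (image and paths are arbitrary):

```bash
# Named volume: Docker manages the storage under /var/lib/docker/volumes/.
docker run -d -v myapp_data:/data nginx

# Bind mount: the data sits at a host path you choose, so you can read,
# edit, and back it up with ordinary tools.
docker run -d -v /home/user/Containers/myapp:/data nginx
```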
You can restore that backup to a new VM. I just make sure it boots and I can access the files. Turn off networking before you boot it so that it doesn’t cause conflicts.
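On Proxmox that test looks roughly like this with the stock CLI (the VM ID, storage name, and backup filename are placeholders):

```bash
# Restore the backup into a throwaway VM ID.
qmrestore /var/lib/vz/dump/vzdump-qemu-100-backup.vma.zst 9100 --storage local-lvm

# Leave the NIC disconnected so the clone can't conflict with the original VM.
qm set 9100 --net0 virtio,bridge=vmbr0,link_down=1

qm start 9100
# ...check that it boots and the files are readable, then clean up:
qm stop 9100 && qm destroy 9100
```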
After getting a NAS to replace my Raspberry Pi 4 as a home server, I literally just SCP'd the bind mounts and the docker-compose folder and adjusted a few env variables (and found a few I needed to add, like the UID/GID the NAS used as the default for the media user I created). It took maybe 30 minutes total to be back up and running. Highly agree with you, from experience.
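That whole migration boils down to something like this (hostnames, paths, and the env values are placeholders):

```bash
# Pull the compose folder and bind mounts over from the old host.
scp -rp pi@old-host:/home/pi/Containers /home/user/Containers

# Adjust env variables for the new host, e.g. in .env:
#   PUID=1000  ->  UID of the NAS media user
#   PGID=1000  ->  GID of the NAS media user

cd /home/user/Containers
docker-compose up -d
```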
Yup, it works great. I actually did it myself when migrating from a CentOS host to a Debian one. It worked on the first try with no issues (except one thing that was already broken, which I didn't know about because I hadn't accessed it recently). Containers are great for this.
Yep, bind mount the data and config directories and back those up. You can test a backup by spinning up a new container against the restored data/config directories (see the sketch below).
This is both easy and, from what I've seen, the generally recommended approach for many services.
The only thing that could cause issues is a breaking change in the Docker image itself, but that's a risk regardless of backup strategy.
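A minimal sketch of that test, pointing a throwaway container at a restored copy of the data (the image name and paths are placeholders):

```bash
# Run a disposable container against the restored backup copy.
docker run --rm -it \
  -v /mnt/nas/backups/containers/myapp/data:/data \
  -v /mnt/nas/backups/containers/myapp/config:/config \
  myapp-image

# If the app comes up and sees its old state, the backup is good.
```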