Hello Self-Hosters,

What is the best practice for backing up data from docker as a self-hoster looking for ease of maintenance and foolproof backups? (pick only one :D )

Assume directories with user data are mapped to a NAS share via NFS and backups are handled separately.

My bigger concern is how you handle all the other stuff that's stored locally on the server: caches, databases, etc. The backup target will eventually be the NAS, and from there it'll be double-backed-up to external drives.

  1. Is it better to run `cp -a /var/lib/docker/volumes/* /backupLocation` every once in a while, or is it preferable to define bind mounts for everything inside /home/user/Containers and then use a script to sync them to wherever you keep backups? What pros and cons have you seen or experienced with these approaches?

  2. How do you test your backups? I'm thinking about digging up an old PC to use for testing restores. I assume I can just edit the IP addresses in the docker-compose file, mount my NFS dirs, and fail over to see if it runs.

  3. I started documenting my system in my notes and making a checklist of what I need to back up and where it's stored. Currently trying to figure out if I want to move some directories for consistency. Can I just do `docker-compose down`, edit the mountpoints in docker-compose.yml, and run `docker-compose up` to get a working system?
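To make (1) and (3) concrete, here's a rough sketch of the workflow I'm imagining. Every path, directory name, and the NAS mountpoint below is a placeholder, not anyone's real layout:

```shell
SRC="/home/user/Containers"           # all bind mounts live under here (assumption)
DEST="/mnt/nas/backups/containers"    # NFS mount from the NAS (assumption)

# (1) Stop everything so databases aren't mid-write, then sync to the NAS:
#   docker compose down
#   backup_containers "$SRC" "$DEST"
#   docker compose up -d
backup_containers() {
    rsync -a --delete "$1/" "$2/"
}

# (3) Moving a bind mount: move the data, then point the compose file at it.
#   usage: move_mount docker-compose.yml /old/host/path /new/host/path
move_mount() {
    mv "$2" "$3"
    sed -i "s|$2|$3|g" "$1"   # rewrite the old host path in the volume lines
}
```

Note this only covers bind mounts; named volumes under /var/lib/docker/volumes would need to be exported separately before a plain file sync is trustworthy.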

  • njordomir@lemmy.world (OP) · 11 hours ago

    Interesting username. Are you a fellow student of Internet Comment Etiquette?

    I know at least some of my containers use Postgres. Glad to know I inadvertently might have made it easier on myself. I'll have to look into the users for the db containers; I'm a bit lost on that. I know my db has a username and password I set in the docker-compose file, and that the process is running with a particular UID/GID or whatever. Is that what you're talking about?

    • glizzyguzzler@piefed.blahaj.zone · 8 hours ago

      I do not know of Internet Comment Etiquette, sorry to disappoint! It's a username that's humorous to me and fits a core tenet of mine.

      Do remember (or put in the .env file) the user/pass for your DBs, but they don't matter much as long as you know them.

      I'm talking about the process: the `user: 6969:6969` line in the docker-compose file. If it's not there, the container runs as whatever user the image defaults to, which is usually root unless you've got rootless Docker going. That could be bad, so head it off if you can! Overall I'd say it's a low priority, but a real one. A naughty container could do bad things with root privilege and some Docker vulnerabilities. I've never heard of that kind of attack in the self-hosted community, but as self-hosting gains traction I worry a legit container will get an attack slipped in somehow and wreck (probably ransomware) root Docker installations.
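      For example, a compose snippet with the user pinned would look something like this (the image name, IDs, and paths are just placeholders):

      ```yaml
      services:
        app:
          image: example/app:latest       # placeholder image
          user: "1000:1000"               # any unprivileged UID:GID on the host
          volumes:
            - /home/user/Containers/app/data:/data
      ```

      The host directory then needs to be owned (chown'd) by that same UID:GID, or the container won't be able to write to it.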

      First priority is backup - then you can worry about removing root containers (if you haven’t already done so!).