[Users] openvz-diff-backups - survival guide - part one

jjs - mainphrame jjs at mainphrame.com
Sat Sep 12 03:26:00 MSK 2020


Thanks for this, it's better than what we had before.

Jake

On Fri, Sep 11, 2020 at 5:24 PM tranxene50 <tranxene50 at openvz-diff-backups.fr> wrote:

> Hello!
>
> Here is the first part of a quick "survival" guide to help you start
> off on the right foot with openvz-diff-backups (OVZDB for short).
>
> Please be aware that English is not my native language. So, if you see
> something weird, please quote the sentence and correct it.
>
> Likewise, if something is not clear, quote it and ask: I will try to
> answer as best I can.
>
> # ---------------------
>
> Firstly, you need to be aware that OVZDB uses three
> "hosts/locations/storages" and "navigates" between them:
>
> # ---------------------
>
> - SOURCE : "host" where OVZDB is installed
>
> Most of the time, this is the server on which OpenVZ is running the
> containers you want to back up.
>
> But it can be any *nix system (with Bash/OpenSSH/rsync) used to
> replicate (upload or download) backups between REMOTE and MASTER.
>
> Everything works over SSH as follows: SOURCE -> SSH key 1 -> MASTER ->
> SSH key 2 -> REMOTE
>
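> For example, here is a minimal sketch of that key chain (the host
> names and key paths are illustrative, not mandated by OVZDB):
>
> # on SOURCE: create and install the key used to reach MASTER
> ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519 -N ""
> ssh-copy-id -i ~/.ssh/id_ed25519.pub root@master.example.net
>
> # on MASTER: same thing for the key used to reach REMOTE
> ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519 -N ""
> ssh-copy-id -i ~/.ssh/id_ed25519.pub root@remote.example.net
>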
> # ---------------------
>
> - MASTER : *mandatory* "host" where backups are stored (copy A)
>
> Ideally, MASTER is a dedicated server/VPS/other because OVZDB relies on
> IOPS: the more RAM you have to cache dentries and inodes, the faster
> OVZDB will be.
>
> However, by default, backups are stored on the same server
> (MASTER_SSH_PATH="root@localhost:/home/backup/openvz-diff-backups").
>
> This is useful if you want to test ASAP or if you have a secondary drive
> where backups can be stored (ex: sda for OpenVZ, sdb for backups).
>
> In this case, SOURCE will communicate with MASTER (both being on the
> same server) using SSH through localhost: as soon as "ssh -p 22
> root@127.0.0.1" gives you a shell without asking for a password, you
> are done.
>
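> Ex (one way to enable passwordless SSH to localhost):
>
> # generate a key if needed, then authorize it for root@localhost
> [ -f ~/.ssh/id_ed25519 ] || ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519 -N ""
> cat ~/.ssh/id_ed25519.pub >> ~/.ssh/authorized_keys
> ssh -p 22 root@127.0.0.1 true && echo "SOURCE can reach MASTER"
>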
> Conversely, if MASTER is a distant host (recommended), you need to
> adjust the MASTER_SSH_PATH parameter.
>
> Ex: MASTER_SSH_PATH="root@backup.my-server.net:/any-absolute-path-you-want"
>
> (a trailing slash is not needed, and "backup.my-server.net" will always
> be resolved to its IPv4 or IPv6 address)
>
> If you need to use an SSH port different from 22, please see the
> MASTER_SSH_OPTIONS parameter in the config file
> (openvz-diff-backups.conf).
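>
> Ex (illustrative only: I assume here that the parameter accepts
> standard ssh options; check the comments in openvz-diff-backups.conf
> for the exact syntax):
>
> MASTER_SSH_OPTIONS="-p 2222"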
>
> # ---------------------
>
> - REMOTE : *optional* host where backups are replicated (copy B)
>
> In order to secure backups, you may want to replicate them, if possible,
> in a different geographical location.
>
> MASTER/REMOTE "hosts" can be anything as long as a *nix system is
> present with a shell, OpenSSH (other SSH servers have not been tested
> yet) and, most importantly, rsync.
>
> This can be a big fat dedicated server, a large VPS, a medium instance
> in the Cloud, a NAS at home or even - if someone is willing to test (I
> didn't because mine is too old) - an Android smartphone...
>
> SOURCE "host" always requires a Bash shell but MASTER/REMOTE "hosts"
> only need a shell (sh/dash/ash/etc) and OVZDB can also deal with
> "Busybox" instead of using standard Unix tools.
>
> In short, OVZDB does not care and will run as long as the "host" can
> handle it (which can take hours/days on very low-end hardware).
>
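> Ex (a quick way to check a MASTER/REMOTE candidate; the host name is
> illustrative):
>
> ssh root@backup.my-server.net 'command -v rsync && uname -a'
>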
> # ---------------------
>
> From SOURCE, you can launch any task (more details in part 2):
>
> - backup task will "convert" containers present on SOURCE into backups
> on MASTER
>
> - restore task will "convert" backups present on MASTER into containers
> on SOURCE
>
> - upload task will replicate backups present on MASTER to REMOTE (push)
>
> - download task will replicate backups present on REMOTE to MASTER (pull)
>
> - delete task will remove backups present on MASTER and/or REMOTE (you
> choose)
>
> - destroy task will wipe "cache" present on MASTER and/or REMOTE (more
> in part 2 because it is not intuitive)
>
> - update task will check and/or update OVZDB to its latest version
> ("one-click" upgrade)
>
> # ---------------------
>
> Before going into details about each command, here are some use case
> scenarios for backups:
>
> (to keep this short, I will not talk about migrating IP addresses,
> adjusting firewalls, replacing a dedicated server and other things)
>
> - 1 server
>
> Your only choice is to store backups on the same server, if possible on
> a secondary hard drive or, better, on an external hard drive.
>
> Long story short, if you are a believer, pray! ^^
>
> - 2 servers (one for prod, one for backup)
>
> If you have enough space, store backups on the prod server (copy A) and
> replicate them (push) on the backup server (copy B).
>
> (or, better, replicate backups from the backup server using "pull"
> mode: this is safer because both servers would have to be compromised
> for you to lose all your backups)
>
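> The difference is only which side holds an SSH key (host names are
> illustrative):
>
> # push: prod holds a key to backup - a compromised prod reaches copy B
> prod# ssh root@backup.example.net true
> # pull: backup holds a key to prod - prod has no path to copy B
> backup# ssh root@prod.example.net true
>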
> Then, use OVZDB on the backup server and restore every container on a
> daily basis to speed things up in the event of an emergency "switch".
>
> This way, if the prod server crashes, you can restore containers on the
> backup server and, because most files are already synced, you will be
> online again quickly.
>
> - 2 servers (both prod)
>
> If you have enough space (again), store the backups of each prod
> server's containers locally (copy A) and replicate them on the other
> prod server (copy B).
>
> (since both servers have root access to each other, "pull" and "push"
> modes are equivalent: if one server is compromised, you are screwed).
>
> Or, you can create OpenVZ containers on both servers to restrict access
> to backups.
>
> This requires that prod A has no SSH key giving access to prod B, and
> vice versa.
>
> Prod A will use container A to store its backups (same for prod B with
> its container B) and then you can use "pull" mode.
>
> Prod B will download backups from "restricted" container A and Prod A
> will download backups from "restricted" container B (this way, if a
> server is compromised, you still have backups).
>
> *WARNING: never, ever, store OVZDB backups in a container using Ploop
> layout: it will get insanely fat and "ploop balloon discard" won't help
> much*
>
> Instead, use bindfs to mount a directory from the host into the container.
>
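> Ex (illustrative paths; 101 is a hypothetical container ID and
> /vz/root/101 its root mount point when the container is running):
>
> mkdir -p /home/backup/ct-101 /vz/root/101/home/backup
> bindfs /home/backup/ct-101 /vz/root/101/home/backup
>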
> Then, again on a regular basis, restore containers from prod A on prod
> B and - you have guessed it - restore containers from prod B on prod A.
>
> If one server crashes, containers from the other server will be almost
> ready to start: just one final restore and you are "done".
>
> - 3 servers (one for prod, one for backup, one for rescue)
>
> Ideal but may be costly.
>
> Store backups on the backup server (in a different data center) and
> replicate them on the rescue server (in a different geographical
> location).
>
> If the backup server can handle the load of the prod server, restore
> containers on it regularly in order to be ready to "switch" ASAP if
> prod crashes.
>
> The rescue server can use "pull" mode to replicate backups (download):
> this way, even if the prod and backup servers are compromised, you
> still have backups.
>
> - 3 servers (two for prod, one for backup)
>
> If possible, store the backups of each prod server's containers locally
> (copy A) and replicate them on the other server (copy B).
>
> Then use the backup server to "pull" backups (if prod A and B are
> compromised, you still have backups).
>
> Or, but this is highly dangerous, store all backups from the prod
> servers on the backup server (push).
>
> - 3 servers (all prod)
>
> See "2 servers(both prod)"
>
> - 4 servers (3 for prod, one for backup)
>
> See "3 servers(two for prod, one for backup)"
>
> - more than 4 servers
>
> At this point, I assume that you are using XFS or a distributed
> filesystem (Ceph?).
>
> - more than 10 servers
>
> You know the drill; the only thing to know is that OVZDB needs IOPS and
> RAM so that the kernel can cache inodes/dentries.
>
> And, if you have 10 Gbit/s network cards, consider syncing and
> de-duplicating "root.hdd" using brute force. ^^
>
> # ---------------------
>
> This is all for today!
>
> Tomorrow, or later, I will explain each task
> (backup/restore/delete/upload/download) in more detail.
>
> --
> tranxene50
> tranxene50 at openvz-diff-backups.fr
>
> _______________________________________________
> Users mailing list
> Users at openvz.org
> https://lists.openvz.org/mailman/listinfo/users
>