Encrypted Backup-Server with Debian and Dirvish – LVM in LUKS

If you haven’t heard of Dirvish, it’s about time. It is a very neat rsync- and hardlink-based backup solution, something like a pro version of Apple’s Time Machine.

What I want

My setup is quite specific to my use case: having used dirvish at work for a while, I was still stuck with backup-manager on my own servers. That tool is quite neat, but it leaves you with differential tarballs of your system, which makes finding a specific version of one file somewhere in the past an annoying task.

Dirvish, running on the backup server, logs into the target system using ssh and syncs the whole filesystem (you can define excludes, of course) to a folder with a timestamp in its name on the backup server. The next time it runs (simple cronjob) it will create a new timestamped folder, sync only changed files and hardlink everything else.
So, in the end we get a kind of snapshot from each time dirvish ran, from which we can simply copy files back or even roll back the complete system.
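Conceptually, one dirvish run boils down to an rsync call like the following (the bank path, hostname and timestamps are made-up examples; dirvish constructs the real command itself):

```shell
# Sync against the newest existing image: unchanged files become hardlinks
# to the previous snapshot, only changed files are transferred and stored.
rsync -a --delete \
      --link-dest=/srv/backup/HOST-root/20240101-0300/tree \
      root@HOST:/ \
      /srv/backup/HOST-root/20240102-0300/tree
```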

When setting up my new dirvish backup server (replacing the ancient tarball-and-ftp solution) I wanted it to save data incrementally and store it on an encrypted device. For ease of use I simply enter the encryption password manually after booting. Therefore only the dirvish partition (“bank”) is on an encrypted device; the rest of the machine is plain.
Caveat: the ssh key for accessing the other machines is on the unencrypted partition; this still needs some rework …

System: LVM on LUKS on VD in VM on LVM. For sure!

My backup server runs in a VM on one of my Xen systems. Actually, the whole dirvish bank (i.e. the location of all backups) is also synced to another machine at home for real backup.

The VM has got two virtual disks: the system disk, xvda, and (for historic reasons) xvdc as storage for dirvish. This disk is completely encrypted using LUKS. I won’t cover encryption in detail here.

Inside this encrypted container I will create an LVM volume group, and backups will be stored on volumes inside this group. The reason for these nested LVMs (the virtual disks already live in an LVM on the host) is that I can create different “tiers” of backups later on: if space on the underlying host gets tight, I can move lower-priority hosts to a second volume without risking that critical machines are no longer backed up.

So we basically have:
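That is, from bottom to top (my sketch of the stack):

```
LVM on the Xen host
 └─ virtual disk (xvdc) handed to the VM
     └─ LUKS container
         └─ LVM volume group
             └─ logical volume(s) holding the dirvish bank (ext4)
```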

Encrypting

I did this on a freshly installed Debian Jessie, though it will quite surely work similarly on Ubuntu, SuSE or RedHatish systems. So first install cryptsetup:
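On Debian that is a single command (the package has the same name on Ubuntu and most other distributions):

```shell
apt-get install cryptsetup
```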

Next, let’s encrypt our whole data disk:
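A sketch of the command; careful, it irrevocably destroys all data on the disk:

```shell
# Turn the whole data disk into a LUKS container
# (asks for confirmation and for the new passphrase)
cryptsetup luksFormat /dev/xvdc
```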

You need to answer some questions, especially for a password. Use a good one, but remember that you will have to type it each time you want to mount the volume. We had better create a backup of the LUKS header and set up a second, even more complex password. Store both in a very safe place:
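Something along these lines; the file name for the header backup is my own choice:

```shell
# Dump the LUKS header to a file; without an intact header (or a copy of it)
# the encrypted data is lost for good
cryptsetup luksHeaderBackup /dev/xvdc \
    --header-backup-file /root/xvdc-luks-header.img

# Add the second, more complex passphrase to a free key slot
cryptsetup luksAddKey /dev/xvdc
```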

We now have a completely encrypted partition and will make it available through the device mapper under /dev/mapper/lukslvm:
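Opening the container looks like this; lukslvm is simply the name the mapping will get:

```shell
# Asks for the passphrase and creates /dev/mapper/lukslvm
cryptsetup luksOpen /dev/xvdc lukslvm
```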

Let’s create an LVM within our container; the virtual disk has 210 GB:
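The volume group and volume names (dirvish, bank) as well as the size are my own choices; I deliberately leave free space in the group for the second tier mentioned above:

```shell
pvcreate /dev/mapper/lukslvm           # decrypted container becomes a PV
vgcreate dirvish /dev/mapper/lukslvm   # volume group inside the LUKS container
lvcreate -L 150G -n bank dirvish       # first tier, leaves room for more
mkfs.ext4 /dev/dirvish/bank            # filesystem for the dirvish bank
```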

Finally we create our mount point and an entry in /etc/fstab.

We use noauto to prevent the system from stalling on boot because the encrypted container isn’t open yet. We also use noatime and nodiratime to speed up rsync’s comparisons.
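A sketch, assuming /srv/backup as the mount point (pick your own):

```shell
mkdir -p /srv/backup
cat >> /etc/fstab <<'EOF'
/dev/dirvish/bank  /srv/backup  ext4  noauto,noatime,nodiratime  0  0
EOF
```

After each reboot the sequence is then: `cryptsetup luksOpen /dev/xvdc lukslvm`, followed by `mount /srv/backup`.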

Preparing dirvish

Now let’s get the backup running.

First you need to create a key pair for ssh logins to the target machines. I won’t cover this here because there are trillions of pages about it. I called the keys id_rsa.dirvish and id_rsa.dirvish.pub, put them in /root/.ssh/ and copied the public key to all target machines (ansible is a very good friend!). Do yourself a favour and test that the login works.

Now install dirvish:
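Again a one-liner on Debian (rsync comes in as a dependency):

```shell
apt-get install dirvish
```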

Now go to /etc/dirvish and edit the master.conf file.

Here we define the bank (as in “place where you store precious things”), which of course is our volume in the LUKS vault.

We also have some common excludes and a default expiry.

Debian ships a cronjob with some automatism; we won’t use that but rather create our own cronjob later in this process.
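For orientation, a minimal master.conf along the lines of my setup; the bank path, excludes and expiry are example values:

```
bank:
    /srv/backup

exclude:
    lost+found/
    proc/
    sys/
    dev/
    tmp/
    var/cache/apt/archives/

expire-default: +30 days
```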

Backing up

Now let’s create our first backup: a backup of the dirvish machine itself. That won’t help against failing disks, but it will against failing admins breaking stuff.

For each target machine we will create a folder hierarchy, the so called vault, which looks like this:
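With /srv/backup as the bank, the skeleton looks like this:

```
/srv/backup/
└── HOST-root/
    └── dirvish/
        └── default.conf
```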

HOST will of course be the hostname of the target machine. Calling it *-root is just a convention for backups of the root tree. You could call it e.g. HOST-mailspool if you only back up the mail directory of a host.
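A default.conf for a whole-system backup could look like this (all values are examples, to be adapted per host):

```
client: HOST
tree: /
xdev: 1
index: gzip
log: gzip
image-default: %Y%m%d-%H%M%S
exclude:
    proc/
    sys/
    tmp/
    var/cache/apt/archives/
```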

Which means:

  • client: Hostname (needs to be resolvable!) of the target machine.
  • tree: Folder to backup.
  • xdev: 1 means the backup stays within one filesystem. Take care if you have /var or /home on separate volumes (which I tend to have).
  • index: Type of compression of the index file (file list)
  • log: Type of compression of the log file saved within the folder.
  • image-default: Naming pattern (timestamp format) for the backup folders.
  • exclude: Folders to be excluded.

There are a lot more config items; here are some I use quite often:

  • pre-server: Path to a script on the dirvish server to be run before backup starts.
  • post-server: Path to a script on the dirvish server to be run after the backup has finished.
  • pre-client: Path to a script on the target machine to be run before backup starts. This is helpful for dumping sql databases before backing up.
  • post-client: Path to a script on the target machine to be run after the backup has finished.
  • speed-limit: Maximum transfer speed in Mbit/s

When we have finished creating our vault, we need to initialise it. This creates the first complete sync.
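Initialising is a single command; HOST-root is the vault created above:

```shell
dirvish --vault HOST-root --init
```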

Depending on the size of the machine this can take quite a while. When done our vault will look like this:
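Roughly like this; the timestamp folder name follows image-default and is just an example:

```
HOST-root/
├── dirvish/
│   └── default.conf
└── 20240102-030000/
    ├── index.gz
    ├── log.gz
    ├── summary
    └── tree/
```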

tree now contains the folder hierarchy synced from the target, while the other files contain meta information.

The last step is to create a cronjob. Simply add a line to /etc/crontab for each vault and make sure to use different run times:
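Example lines (hostnames and times are placeholders); if you rely on expire-default, also schedule dirvish-expire, which is what actually deletes expired images:

```
# /etc/crontab
30 2 * * *  root  /usr/sbin/dirvish --vault HOST-root
30 3 * * *  root  /usr/sbin/dirvish --vault OTHERHOST-root
0  5 * * *  root  /usr/sbin/dirvish-expire
```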

OK, there’s one more step: create some kind of monitoring facility, e.g. use post-server to send the summary and the log by mail, parse those files and react to the results, or hook it all into your existing monitoring solution …
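As a starting point, a hypothetical post-server script that mails summary and log. It assumes a working local mail command and relies on dirvish exporting DIRVISH_CLIENT and DIRVISH_DEST (the .../tree path of the current image) to pre/post scripts:

```shell
#!/bin/sh
# Hypothetical post-server hook: mail the results of the finished run.
image_dir="$(dirname "$DIRVISH_DEST")"   # image folder above .../tree
{
    cat "$image_dir/summary"
    zcat "$image_dir/log.gz"
} | mail -s "dirvish: $DIRVISH_CLIENT finished" root
```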