Published on 2020-05-31 by Kenneth Flak
Back to Tech Research
Ever since I abandoned iLife I have been missing the now discontinued Time Capsule: a solid, local, incremental backup solution that doesn't require me to manually attach a USB hard drive to my computer and run a script. I use the excellent rsync.net for cloud backup, but I consider it more of a last resort: the thought of downloading 300 GB of data from a remote server to restore my system does not inspire enthusiasm.
My first thought was to connect a backup hard drive to the internet router's USB port. Many friendly yet fruitless phone calls later I was told by my internet provider that I would need a real IT person to set this up. And that they used to have one at some point in time.
A plausible solution arose when I got my hands on a Raspberry Pi 4 and discovered the excellent rsnapshot script. It uses rsync to create periodic snapshots of a system, using hard links wherever possible, so unchanged files take up no extra space. Perfect for my needs, in other words.
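To see why hard links make these snapshots so cheap, here is a small throwaway illustration (the file names are just examples, not rsnapshot's actual layout):

```shell
# Create a scratch directory with one file, then hard-link it,
# the way rsnapshot links unchanged files between snapshots.
tmp=$(mktemp -d)
echo "unchanged data" > "$tmp/snapshot.0"
ln "$tmp/snapshot.0" "$tmp/snapshot.1"   # no extra data is stored on disk

# Both names point at the same inode, so the link count is 2:
stat -c '%h' "$tmp/snapshot.0"   # prints 2
rm -r "$tmp"
```

A file that never changes is stored once, no matter how many snapshots reference it; only new and modified files cost disk space.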
Setting it up for automated backup on the Raspberry Pi does require a bit of tinkering with configuration files and systemd services, but luckily all the necessary information is readily available on the Arch Linux wiki. The end result is that rsnapshot runs as a systemd service at various intervals, copying new and modified files from my computer to an external hard drive attached to the Rpi.
On the computer, run:
sudo pacman -S rsnapshot
This will pull in rsnapshot and a couple of dependencies, most importantly rsync.
Install rsync on the Rpi:
sudo pacman -S rsync
In order to be able to push the backup from the computer to the Rpi we need to set up an NFS share. This is a very useful protocol for sharing drives and directories on the local network.
Install nfs-utils on both the computer and the Rpi:
sudo pacman -S nfs-utils
On the Rpi, create the share directory. This is where we will mount the backup hard drive:
sudo mkdir /srv/nfs/backup
Change permissions recursively to give everybody on the local network full access. Only recommended if you fully trust the network!
sudo chmod -R 777 /srv/nfs/backup
Edit /etc/exports on the Rpi:
sudoedit /etc/exports
Add this to the end of the file:
/srv/nfs/backup 192.168.1.0/24(rw,sync,no_root_squash)
Again: this will allow anyone on the network to access the hard drive! You could also explicitly specify the computer you want to grant access to:
/srv/nfs/backup 192.168.1.186(rw,sync,no_root_squash)
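After editing /etc/exports you don't need to restart the whole server; exportfs (part of nfs-utils) can reload the export table in place. A quick sketch:

```shell
# Re-export everything in /etc/exports, re-reading the file (run on the Rpi):
sudo exportfs -arv

# Show what the server is currently exporting:
sudo exportfs -s
```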
Start the server and enable it on reboot:
sudo systemctl enable --now nfs-server.service
Go back to your computer, create the relevant directory and mount the shared drive:
sudo mkdir /run/media/backup
sudo mount <ip_of_rpi>:/srv/nfs/backup /run/media/backup
To make the mount persistent across reboots, add this line to the computer's /etc/fstab (note the nfs filesystem type, which the entry needs to mount correctly):
<ip_of_rpi>:/srv/nfs/backup /run/media/backup nfs auto,nofail 0 0
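You can check the new fstab entry without rebooting; mount -a only attempts entries that aren't already mounted, and findmnt (from util-linux) confirms the result:

```shell
# Mount everything listed in /etc/fstab that isn't mounted yet:
sudo mount -a

# Confirm the NFS share is mounted where we expect it:
findmnt /run/media/backup
```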
Now everything is set up and ready. The last thing we need to do is mount the actual hard drive. On the Rpi, run:
sudo blkid
The external hard drive will be called something like /dev/sda1. Make a note of the hard drive's UUID: this is a unique identifier that you will need when making the mount persistent.
Mount the disk (replace sda1 with your disk identifier):
sudo mount /dev/sda1 /srv/nfs/backup
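If you want just the UUID for the fstab entry below, blkid can print that single field (again, replace sda1 with your disk identifier):

```shell
# Print only the UUID of the backup partition:
sudo blkid -s UUID -o value /dev/sda1
```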
To make this persistent across reboots, put this line in the /etc/fstab file on your Rpi (the mount point is the NFS share directory we created earlier, and auto together with nofail mounts the drive at boot when it is present without blocking the boot if it isn't):
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx /srv/nfs/backup ext4 auto,nofail,noexec,nouser,rw,async 0 2
Replace all the x-es with the UUID you discovered earlier.
This is where things get interesting. On the computer, back up the default rsnapshot configuration file:
sudo cp /etc/rsnapshot.conf /etc/rsnapshot.conf.default
Next, open up the file for editing:
sudoedit /etc/rsnapshot.conf
The configuration file is packed with useful information. Here are the lines that need to be uncommented or adjusted for this setup to work:
snapshot_root /run/media/backup/
# This is a removable disk, so we don't want to accidentally create a new snapshot directory
no_create_root 1
# Arch Linux-specific binary paths
cmd_cp /usr/bin/cp
cmd_rm /usr/bin/rm
cmd_rsync /usr/bin/rsync
cmd_ssh /usr/bin/ssh
cmd_logger /usr/bin/logger
cmd_du /usr/bin/du
cmd_rsnapshot_diff /usr/bin/rsnapshot-diff
# Number of snapshots to keep at each interval
retain hourly 24
retain daily 7
retain weekly 4
# how much chatter do we want
verbose 2
loglevel 3
# location of lockfile
lockfile /var/run/rsnapshot.pid
# Directories to ignore:
exclude "/home/<user>/.cache"
exclude "/home/<user>/.stack"
exclude "/home/<user>/.tmp"
exclude "/home/<user>/.thumbnails"
exclude "/home/<user>/.cargo"
exclude "/home/<user>/build"
exclude "/home/<user>/node_modules"
exclude "/home/<user>/Downloads"
# paths to back up -> path in the snapshot directory
# note that the next lines are separated with TABS, not SPACES
backup /home/<user>/ localhost/
backup /etc/ localhost/
backup /usr/local/ localhost/
Test the configuration by running:
rsnapshot -t hourly
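rsnapshot also ships a configtest mode, which is handy for catching the classic tabs-versus-spaces mistake in the configuration file before a dry run:

```shell
# Check the configuration file syntax; prints "Syntax OK" if it parses:
rsnapshot configtest

# Dry run: show the commands that would be executed, without running them:
rsnapshot -t hourly
```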
The next thing we need to do is set up the systemd service that will run the backup.
sudoedit /etc/systemd/system/rsnapshot@.service
The content of this file is:
[Unit]
Description=rsnapshot [%I] backup
Requires=run-media-backup.mount
After=run-media-backup.mount
[Service]
Type=oneshot
Nice=19
IOSchedulingClass=idle
ExecStart=/usr/bin/rsnapshot %I
One thing to note is the section:
Requires=run-media-backup.mount
After=run-media-backup.mount
This guarantees that the service will only run if the removable hard drive is attached and mounted.
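The unit name run-media-backup.mount is the systemd-escaped form of the mount path /run/media/backup. If your mount point differs, systemd-escape will generate the correct name for you:

```shell
# Convert a mount path into the matching systemd mount unit name:
systemd-escape -p --suffix=mount /run/media/backup
# → run-media-backup.mount
```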
We want to run the backup on hourly, daily and weekly intervals. To achieve this we need to create these files:
sudoedit /etc/systemd/system/rsnapshot-hourly.timer
sudoedit /etc/systemd/system/rsnapshot-daily.timer
sudoedit /etc/systemd/system/rsnapshot-weekly.timer
The files look like this:
/etc/systemd/system/rsnapshot-hourly.timer
[Unit]
Description=rsnapshot hourly backup
[Timer]
# run hourly
OnCalendar=*-*-* *:00:00
Persistent=true
Unit=rsnapshot@hourly.service
[Install]
WantedBy=timers.target
/etc/systemd/system/rsnapshot-daily.timer
[Unit]
Description=rsnapshot daily backup
[Timer]
# run daily
OnCalendar=09:30
Persistent=true
Unit=rsnapshot@daily.service
[Install]
WantedBy=timers.target
/etc/systemd/system/rsnapshot-weekly.timer
[Unit]
Description=rsnapshot weekly backup
[Timer]
# run weekly
OnCalendar=Monday *-*-* 10:30:00
Persistent=true
Unit=rsnapshot@weekly.service
[Install]
WantedBy=timers.target
The very last thing we want to do before enabling the timers is to make sure the service works as advertised:
sudo systemctl start rsnapshot@hourly.service
This will probably take a long time on the first run, depending on how much data you have and the speed of the network and hard drive. After you are satisfied everything works, enable the timers:
sudo systemctl enable --now rsnapshot-hourly.timer
sudo systemctl enable --now rsnapshot-daily.timer
sudo systemctl enable --now rsnapshot-weekly.timer
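To confirm the timers are registered and see when each one will fire next:

```shell
# List the rsnapshot timers along with their next and last activation times:
systemctl list-timers 'rsnapshot-*'
```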
If all the stars are aligned and every link in the chain works you should now have a working backup solution that requires minimal input in your daily life. All your files will now be available as a regular file system on the external hard drive.
The first time I ran this setup I started the backup in the evening and woke up to find a valiantly working laptop and Rpi, all lights ablink and fans ablowing. The combined effort of 10 hours of work was a glorious 14 GB out of the ~300 GB I needed to back up. I can live with relatively slow performance when the backup happens regularly and in small batches, but for a first run it becomes unbearable. So I unceremoniously unmounted the network drive from laptop and Rpi, stopped the rsnapshot-*.timers, killed the NFS server on the Rpi and plugged the USB drive into the laptop instead. The beauty of the NFS setup is that rsnapshot doesn't care whether the drive is mounted over the network or attached physically. So for the initial backup run I did this on my computer:
sudo mount /dev/sdb1 /run/media/backup
sudo rsnapshot -D hourly
The -D flag ensures a running commentary on which files get copied over. And it is so, so, so much faster than the network version.
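Restoring is pleasantly boring, because every snapshot is a plain directory tree under the snapshot root, with hourly.0 being the most recent. A sketch of pulling a directory back (paths are illustrative, with <user> as in the configuration above):

```shell
# Browse the latest hourly snapshot; it mirrors the backed-up paths:
ls /run/media/backup/hourly.0/localhost/

# Copy a directory back, preserving permissions and timestamps:
rsync -a /run/media/backup/hourly.0/localhost/home/<user>/Documents/ \
      /home/<user>/Documents/
```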
If you have any feedback or comments, please get in touch!