I’m currently preparing my setup for writing my PhD thesis, and it’s taking quite some time to get everything ready.
Today, I set up a backup strategy that should keep my precious writings safe against any cataclysmic disaster. Here it is.
I write in a local folder on my SSD.
- The SSD is backed up via Time Machine to my server (onto a RAID 5 device), when I’m in my home LAN.
- The SSD is backed up via Time Machine to an external HDD (just a simple USB 3 drive), when connected.
- The Backblaze cloud backup is running continuously in the background, backing up my whole SSD to the cloud.
- The local folder is actually a Git repository, which allows me to track changes and, potentially, write on different topics from different branches.
- The local Git repository has a remote on our institute’s server, which is backed up daily.
- The local Git repo has a second remote destination: Bitbucket. By making use of this small hack, I can push to both remote destinations with a single shell command: the usual git push origin master.
- Every two hours, a small backup shell script runs (see below).
- Via rsync it copies the thesis folder content to my local Dropbox folder, initiating an upload to the Dropbox cloud.
- Via gsync it copies the folder to my Google Drive. Here, I skipped the .git/ folder, as gsync is incredibly slow at initiating connections for files. Also, I had to apply this fix in gsync’s code.
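The two-remotes-behind-one-push trick above boils down to giving the origin remote two push URLs. A minimal sketch (the repository URLs are placeholders, not my actual servers — it runs in a scratch repo purely for demonstration; in practice you would run only the two set-url lines inside your existing thesis repository):

```shell
# Demo in a throwaway repository.
cd "$(mktemp -d)"
git init -q .

# Hypothetical URL standing in for the institute server.
git remote add origin git@gitlab.institute.example:me/thesis.git

# Keep the institute server as a push target, then add Bitbucket.
# (Once any push URL is set explicitly, the fetch URL is no longer
# used for pushing, so the original URL must be re-added too.)
git remote set-url --add --push origin git@gitlab.institute.example:me/thesis.git
git remote set-url --add --push origin git@bitbucket.org:me/thesis.git

# Shows one fetch URL and two push URLs for origin.
git remote -v
```

After this, git push origin master transparently pushes to both servers.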
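The rsync step of the two-hourly script could be sketched like this — the paths are hypothetical, not my actual layout:

```shell
#!/bin/sh
# Hypothetical paths -- adjust to your own machine.
SRC="$HOME/thesis/"                  # trailing slash: copy folder *contents*
DEST="$HOME/Dropbox/thesis-backup/"  # Dropbox then syncs this to the cloud

mkdir -p "$DEST"

# -a preserves permissions and timestamps; --delete mirrors removals
# so the Dropbox copy stays an exact replica of the thesis folder.
rsync -a --delete "$SRC" "$DEST"
```

To run such a script every two hours, a crontab entry along the lines of `0 */2 * * * /path/to/backup.sh` would do.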
This, for sure, is overkill. Who needs 7 direct backups anyway?! But I was interested in how much redundancy I could achieve while disturbing my workflow as little as possible. I’m quite satisfied.
My (current) backup.sh file: