Adrian Lita

How to set up a nice and simple backup script on Ubuntu

by Adrian Lita - on 2018-10-16

Keywords: #ubuntu18 #backup #bash #cron #mysql #apache2 #files
Permalink: https://adrian.lita.me/blog_how-to-setup-nice-and-simple-backup-script-on-ubuntu.html
 

These days, while setting up a new server in the evening and being a bit tired, I managed to destroy two days' work, simply because I had no backup solution in place. Not wanting to pay extra for a backup solution, and since I already have a VPS which is very handy and cheap, I decided to build my own: simple, effective and free :)

The only things I needed to back up were actual files and some of the MySQL databases. To be able to follow my tutorial 100%, you will need the following:

  • an Apache web server (preferably with HTTPS) on the server you want to back up
  • root access in order to set up cron jobs on both servers
  • preferably a bash shell on both machines, as the scripts are written mostly for bash
  • one MySQL account that is able to access the databases needed for backup (it is not recommended to use the root account for that; a sketch for creating such an account follows right after this list)
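
If you don't already have such a MySQL account, here is a minimal sketch of how one could be created; the user name backup_user, the password and the database names are placeholders to adapt to your own setup, and the privileges listed are the ones mysqldump typically needs. It assumes the Ubuntu 18 default where root can log in locally with sudo mysql:

sudo mysql <<'SQL'
CREATE USER 'backup_user'@'localhost' IDENTIFIED BY 'your_password';
GRANT SELECT, LOCK TABLES, SHOW VIEW, EVENT, TRIGGER ON database1.* TO 'backup_user'@'localhost';
GRANT SELECT, LOCK TABLES, SHOW VIEW, EVENT, TRIGGER ON database2.* TO 'backup_user'@'localhost';
FLUSH PRIVILEGES;
SQL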

Setting up the first server (the one we need to back up)

Let's start with the server containing the files that you want to back up. We'll call this the first server. Choose a folder of your choice to hold the scripts and credentials. Personally, since we're saving a plain-text password for the MySQL backup account, I prefer to put everything in the /root directory and make sure it is not readable by anyone else. Then start by defining the credentials for the MySQL database. For this, create a file named mysql.cnf:

sudo touch /root/mysql.cnf
sudo chmod 600 /root/mysql.cnf
sudo nano /root/mysql.cnf

Now write the content of the file, which is the following:

[client]
host = localhost
user = your_username
password = your_password

Hit Ctrl+O to save, confirm the file name with Enter, and then hit Ctrl+X to quit nano and finish editing the file. Now we need to create and write the script that will actually do the backup:

sudo touch /root/backup.sh
sudo chmod 700 /root/backup.sh
sudo nano /root/backup.sh

The lines above create the file backup.sh in /root and make sure it is readable and executable only by root. Next we need to write the actual code that will reside in the file:

#!/bin/bash

#full path to the mysql.cnf file created earlier
mysqlcnf="/root/mysql.cnf"

#here just write a list of all the databases you want to save
databases=(
        "database1"
        "database2"
)

#here just write a list of all the folders you want to save
#please note that no two folders may have the same base name
#in this example I back up the web files and Apache's settings on Ubuntu 18
folders=(
        "/var/www/"
        "/etc/apache2/"
)

#here write the destination folder of the backup file, preferably on the web server
#the script makes sure to delete the previous backup
backupsave="/var/www/html/ssl/backup"

printf "Backup started on $(date)\n"

printf "Removing older backup files ... "
rm $backupsave/backup.tar
printf "OK\n"

#we're making the archive reside in /tmp while creating it, and at the end we delete it
mkdir /tmp/backup
cd /tmp/backup
mkdir sql
for i in "${databases[@]}"
do
    printf "Dumping SQL $i ... "
    mysqldump --defaults-extra-file="$mysqlcnf" "$i" > "sql/$i.sql"
    printf "OK\n"
done

printf "Archiving SQL ... "
tar -zcf sql.tar.gz sql
printf "OK\n"

printf "Removing SQL files ... "
rm -rf sql
printf "OK\n"

fldr=""
for i in ${folders[@]}
do
        cd $i
        cd ..
        bn=$(basename $i)
        fldr="$fldr $bn.tar.gz"
        printf "Archiving $bn ... "
        tar -zcf /tmp/backup/$bn.tar.gz $bn
        printf "OK\n"
done

cd /tmp
printf "TAR-ing everything archives ... "
tar -cf backup.tar backup
printf "OK\n"

printf "Removing unused files ... "
rm -rf /tmp/backup
printf "OK\n"

printf "Moving backup file to $backupsave ... "
mv backup.tar $backupsave
printf "OK\n"

printf "Changing owner to www-data ... "
chown www-data:www-data $backupsave/backup.tar
printf "OK\n"

printf "Backup finished on $(date)\n"

Save it, and it's good to go. You can test it out by running it as root (for example sudo /root/backup.sh). When run, it will back up all the files and in the end produce an output similar to this:

Backup started on Tue Oct 16 00:00:01 CEST 2018
Removing older backup files ... OK
Dumping SQL database1 ... OK
Dumping SQL database2 ... OK
Archiving SQL ... OK
Removing SQL files ... OK
Archiving www ... OK
Archiving apache2 ... OK
TAR-ing all archives ... OK
Removing unused files ... OK
Moving backup file to /var/www/html/ssl/backup ... OK
Changing owner to www-data ... OK
Backup finished on Tue Oct 16 00:00:03 CEST 2018
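
If you want to double-check what actually ended up in the archive, you can list its contents without extracting it; the path below is the backupsave folder we configured in the script:

sudo tar -tvf /var/www/html/ssl/backup/backup.tar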

As you'll see next, the output can be logged to a file, to be reviewed later. Now all we need to do on this server is set up the cron job that periodically runs this script. To do this, type sudo crontab -e. This will open nano (or another editor) in which you can edit the cron jobs:

[everything above]
# m h  dom mon dow   command
0 0 * * * /root/backup.sh >> /root/backup.log 2>&1

At the end of the file (assuming you run Ubuntu 18 and you saved your files to /root) you need to insert the following line: 0 0 * * * /root/backup.sh >> /root/backup.log 2>&1. Save the file, and you're all set. This line adds a cron job that will be executed every day at midnight. The job runs /root/backup.sh and redirects its output to /root/backup.log; that's where the log will go. Note that we're using >> instead of > to redirect the output to a file. The difference between them is that the double arrow appends new data to the file, while the single arrow overwrites it.
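
To double-check that the job was registered and that it produces output, you can list the root crontab and keep an eye on the log after the first midnight run; the log path is the one chosen above:

sudo crontab -l
tail -f /root/backup.log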

Now you have a daily script which backs up your files. The next thing to do is to set up a .htaccess file to protect the folder where you're saving them. Note: even with a password-protected folder, in order to have a minimum amount of security, you must use HTTPS when transferring the file. If this example isn't a fit for you in terms of security, you can always choose another transfer mechanism, such as SFTP. Ok, moving on, let's set up a password-protected Apache folder. First, go somewhere outside your web-servable folders, and set up a .htpasswd file. In this example, we'll use /var/www/settings:

sudo mkdir -p /var/www/settings
cd /var/www/settings
sudo htpasswd -c backup.htpasswd backup

It will prompt you to type in a password, and then retype it for confirmation. The htpasswd command creates a file named backup.htpasswd and adds the user backup to it. Let's assume that you entered the password backup123. Now you can preview the backup.htpasswd file; it should look something like this (the password hash will be different though):

cat backup.htpasswd 
backup:$apr1$hkeYzkhp$N/c6LoISxokNiZvTZbVOe/
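
A quick note on this tool: the -c flag creates the file from scratch, so it is only needed the first time. If you later want to add another user, run htpasswd on the existing file without -c; the user name anotheruser below is just an example:

sudo htpasswd /var/www/settings/backup.htpasswd anotheruser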

All you need to do now is set up the .htaccess file to protect your web folder. Go to your folder, and create the .htaccess file:

cd /var/www/html/ssl/backup
sudo nano .htaccess

Inside, enter the following code:

AuthType Basic
AuthName "Password Protected Area"
AuthUserFile /var/www/settings/backup.htpasswd
Require valid-user

Save the file, and you should be all set and ready on this server. You can test everything again by running the /root/backup.sh script, and then pointing your browser to https://yoursite.com/backup, where you will need to log in with the username backup and password backup123. In that folder, you should see a file named backup.tar. If something isn't right, please check the folder and file names.
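
One gotcha worth checking: on a stock Ubuntu 18 Apache configuration, AllowOverride is set to None for /var/www, in which case the .htaccess file above is silently ignored and the backup folder stays publicly reachable. Here is a minimal sketch of how to allow it, assuming the default apache2.conf layout; the directory path is the backupsave folder from earlier, so adjust it if yours differs:

sudo nano /etc/apache2/apache2.conf
# add (or adjust) a block like the following, then save:
#     <Directory /var/www/html/ssl/backup>
#             AllowOverride AuthConfig
#     </Directory>
sudo systemctl reload apache2

A quick way to confirm the protection from the command line is a request without credentials, which should now return a 401 instead of the file:

curl -I https://yoursite.com/backup/backup.tar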

Setting up the second server (the backup server)

Now that we're done with the first server, we'll move on to the backup server, the one responsible for safely retrieving and storing the backup files. Here, we'll set up another script which will grab the file from the first server and store it in multiple locations, on multiple disks. For this, we assume that the second server has three separate disks (there is a quick mount check right after the list below):

  • The first disk is the disk where the system resides. This is mapped to /, and is the disk where our script and cron job will reside.
  • The second disk is an SSD, mapped to /opt/fastssd. It will only be used for backups.
  • The third disk is a hard drive, used for backups as well. It is mounted to /opt/backup.
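
Before going further, it's worth confirming that both backup disks are actually mounted where we expect them; a simple check could look like this:

df -h /opt/fastssd /opt/backup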

First we need to create the retrieval script. As a prerequisite, you need curl to be installed on your system (sudo apt install curl). Create the retrieve_script.sh file under /root:

sudo touch /root/retrieve_script.sh
sudo chmod 700 /root/retrieve_script.sh
sudo nano /root/retrieve_script.sh

Write the following code inside, and then save it.

#!/bin/bash

cd /tmp
link="https://example.com/backup/backup.tar"
user="backup"
password="backup123"

wg="curl -o backup.tar -u $user:$password $link"
$wg

cDate=`date +%Y-%m-%d`

cp backup.tar /opt/fastssd/example_$cDate.tar
cp backup.tar /opt/backup/example_$cDate.tar
rm backup.tar

It is a very simple script, which gets the file via HTTPS from your first server, making use of the username and password of the protected folder. Afterwards, it just copies it to different locations, renaming it to example_ plus the current date, so saved files will look like example_2018-10-16.tar. All we need to set up now is the cron job which runs this daily. We can do this with the sudo crontab -e command. Here you need to append the following text:

[everything above]
# m h  dom mon dow   command
0 2 * * * /root/retrieve_script.sh >> /root/retrieve_script.log 2>&1

This script is set to run daily at 02:00 AM. I've put 02:00 AM there because I have a one-hour timezone difference between my servers, and because I estimate that the backup script on the first server won't take more than an hour to generate all the files.
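
After the first night you can confirm that both copies landed on their respective disks; the dates in the file names will match the days the script ran:

ls -lh /opt/fastssd/example_*.tar /opt/backup/example_*.tar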

Now you're all set! Both servers run their daily backup scripts, and your first server is backed up. The second server isn't, but it saves the files on two different drives. This way, if one drive has a physical failure, you can still get the data from the other drive.

Setting up multiple servers

What makes this setup so powerful is that you can actually use it on multiple servers, and use only one server to back up all of them. For this, on the other servers you just need to follow the steps for the first server, and on the backup server simply duplicate your retrieve_script.sh file, change the example_ prefix (plus the link, user and password), set up one cron job per copy, and basically you're all set! A sketch of what the crontab could look like follows below.
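
For instance, with two servers to back up, the root crontab on the backup server could look like this; the script names are hypothetical, one duplicated copy of retrieve_script.sh per server, staggered by an hour so the downloads don't overlap:

# m h  dom mon dow   command
0 2 * * * /root/retrieve_server1.sh >> /root/retrieve_server1.log 2>&1
0 3 * * * /root/retrieve_server2.sh >> /root/retrieve_server2.log 2>&1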