How to create a secure incremental offsite backup in Linux with Duplicity

If you maintain mission-critical data on your server, you probably want to back it up at a remote site for disaster recovery. For any type of offsite backup, you need to consider encryption to prevent unauthorized access to the backup. It is also important to use incremental backups, as opposed to full backups, to save the time, disk storage and bandwidth costs incurred by ongoing backup activity.

Duplicity is an encrypted incremental backup tool for Linux. It uses librsync to generate bandwidth- and space-efficient incremental archives, and GnuPG to encrypt and/or sign backup archives, preventing unauthorized data access and tampering.

In this tutorial, I will describe how to create a secure incremental offsite backup in Linux with Duplicity.

Install Duplicity on Linux

To install Duplicity on Debian, Ubuntu or Mint:

$ sudo apt-get install duplicity python-paramiko

To install Duplicity on CentOS or RHEL, first enable the EPEL repository, and then run:

$ sudo yum install duplicity python-paramiko

To install Duplicity on Fedora:

$ sudo yum install duplicity python-paramiko

Create a Secure Incremental Remote Backup via SCP

To create a secure, incremental backup of a local folder (e.g., ~/Downloads) and transfer it to a remote SSH server via SCP, use the following command. Note that before proceeding, you must first set up password-less SSH login to the remote server.
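Password-less login is set up by generating an SSH key pair and installing the public key on the backup server. A minimal sketch (the key path and server address are examples):

```shell
# Generate an RSA key pair with an empty passphrase (example path)
ssh-keygen -q -t rsa -b 4096 -N "" -f /tmp/backup_key

# Install the public key on the remote server (asks for the SSH password once):
# ssh-copy-id -i /tmp/backup_key.pub user@remote_site.com
```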

$ duplicity ~/Downloads scp://user@remote_site.com//home/user/backup/
Local and Remote metadata are synchronized, no sync needed.
Last full backup date: none
GnuPG passphrase:
Retype passphrase to confirm:
No signatures found, switching to full backup.
--------------[ Backup Statistics ]--------------
StartTime 1375918500.17 (Wed Aug  7 19:35:00 2013)
EndTime 1375918539.07 (Wed Aug  7 19:35:39 2013)
ElapsedTime 38.90 (38.90 seconds)
SourceFiles 3
SourceFileSize 65982804 (62.9 MB)
NewFiles 3
NewFileSize 65982804 (62.9 MB)
DeletedFiles 0
ChangedFiles 0
ChangedFileSize 0 (0 bytes)
ChangedDeltaSize 0 (0 bytes)
DeltaEntries 3
RawDeltaSize 65978708 (62.9 MB)
TotalDestinationSizeChange 66132356 (63.1 MB)
Errors 0
-------------------------------------------------

When you create a remote backup of given data for the first time, Duplicity will create a full backup, and ask you to set an initial GnuPG passphrase for encryption. Subsequent runs of Duplicity will create incremental backups, and you need to provide the same GnuPG passphrase created during the first run.
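For ongoing backups, you would typically schedule Duplicity from cron. A hypothetical crontab entry (the paths, passphrase and schedule are examples) that runs an incremental backup nightly, and forces a fresh full backup once a month using Duplicity's --full-if-older-than option:

```
# Nightly at 2:00 AM; start a new full backup when the last one is older than 1 month
0 2 * * * PASSPHRASE=mypass duplicity --full-if-older-than 1M /home/user/Downloads scp://user@remote_site.com//home/user/backup/
```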

Create a Secure Incremental Remote Backup in Non-interactive Mode

If you do not want to be prompted for a passphrase, you can set the PASSPHRASE environment variable before running Duplicity, as follows.

$ PASSPHRASE=mypass duplicity ~/Downloads scp://user@remote_site.com//home/user/backup/

If you do not want to pass a plain-text passphrase on the command line, you can create the following backup script. To be more secure, make the script readable and executable by you only.

export PASSPHRASE=yourpass
duplicity ~/Downloads scp://user@remote_site.com//home/user/backup/
unset PASSPHRASE
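The script above can then be locked down so that only the owner can read the embedded passphrase. A minimal sketch (the file name backup.sh is an example):

```shell
# Save the backup commands into a script (the target URL is the example used above)
cat > backup.sh <<'EOF'
#!/bin/sh
export PASSPHRASE=yourpass
duplicity ~/Downloads scp://user@remote_site.com//home/user/backup/
unset PASSPHRASE
EOF

# Restrict the script so only the owner can read or run it
chmod 700 backup.sh
```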

Create an Incremental Remote Backup Without Encryption

If you do not need a secure backup, you can disable encryption as follows.

$ duplicity --no-encryption ~/Downloads scp://user@remote_site.com//home/user/backup/

Verify the Integrity of a Remote Backup

For critical data, it is probably a good idea to verify that a remote backup was successful. You can check whether or not the local and remote volumes are in sync, by using the following command.

$ duplicity verify scp://user@remote_site.com//home/user/backup/ ~/Downloads
Local and Remote metadata are synchronized, no sync needed.
Last full backup date: Wed Aug  7 19:34:52 2013
Verify complete: 8 files compared, 0 differences found.

Note that when using the "verify" command, you need to reverse the order of arguments: specify the remote URL first, followed by the local folder.
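You can also inspect the chain of full and incremental backups stored at the remote location with Duplicity's collection-status command:

```
$ duplicity collection-status scp://user@remote_site.com//home/user/backup/
```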

Restore a Remote Backup

In order to restore a remote backup locally, run the following command:

$ duplicity scp://user@remote_site.com//home/user/backup/ ~/Downloads_restored

To successfully restore a remote backup, the specified restore destination directory (e.g., Downloads_restored) must not exist locally beforehand.
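Duplicity can also restore a subset of the backup, or its state at an earlier point in time. The --file-to-restore and -t (time) options below are standard Duplicity options; the paths are examples:

```
# Restore a single file from the latest backup
$ duplicity --file-to-restore myfile.txt scp://user@remote_site.com//home/user/backup/ ~/myfile.txt

# Restore the backup as it existed 3 days ago
$ duplicity -t 3D scp://user@remote_site.com//home/user/backup/ ~/Downloads_restored
```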

Create a Secure Incremental Remote Backup via FTP

Besides SCP, Duplicity also supports several other protocols including FTP.

To use FTP in Duplicity, use the following format:

$ duplicity ~/Downloads ftp://username@ftp_server.com/backup_directory

For non-interactive runs, specify the FTP password in the FTP_PASSWORD environment variable:

$ FTP_PASSWORD=mypass duplicity ~/Downloads ftp://username@ftp_server.com/backup_directory
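Besides scp:// and ftp://, Duplicity accepts other target URL schemes. For example, a locally mounted drive can be used as a backup target via file:// (the mount point below is an example):

```
$ duplicity ~/Downloads file:///media/external/backup
```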

Duplicity Troubleshooting Tips

If you encounter the following error, it means that the SSH2 protocol library for Python (paramiko) is not installed.

BackendException: Could not initialize backend: No module named paramiko

To fix this error, install the following.

On Ubuntu, Debian or Mint:

$ sudo apt-get install python-paramiko

On Fedora, CentOS or RHEL:

$ sudo yum install python-paramiko

If you encounter the following error, it is because you have not set up password-less SSH login to the remote backup server. Make sure that you have, and retry.

BackendException: ssh connection to xxx@xxxx failed: No authentication methods available

