LVM-based backup and replication

In essence, Linux Logical Volume Management (LVM)-based Backup and Replication uses shell scripts to determine the logical volume and volume group where the repository storage folder resides, and then creates a filesystem snapshot. Once the snapshot is created, the repository is available for reads and writes while the maintenance operation is still in progress. When the operation finishes, the snapshot is removed and the changes are merged back into the filesystem.

Prerequisites

The feature is used only when all of the following hold:


  • Linux OS;

  • The system property (JVM’s -D) named lvm-scripts points to the folder with the scripts;

  • The folder you are about to back up or use for replication contains a file named;

  • That folder DOES NOT HAVE a file named lock.

When all of the above hold, the repository storage is considered ‘ready’ for maintenance.

How it works

By default, the LVM-based Backup and Replication feature is disabled.

To enable it:

  1. Get the scripts located in the lvmscripts folder of the distribution.

  2. Place them in a folder of your choice on each of the workers.

  3. Set the system property (JVM’s -D) named lvm-scripts, e.g., -Dlvm-scripts=<folder-with-the-scripts>, to point to the folder with the scripts.
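As an illustration, the property could be passed directly on the Java command line. The path and launcher below are placeholders, since the actual startup command depends on your installation:

```shell
# Placeholder path and launcher -- adjust to your installation
java -Dlvm-scripts=/opt/graphdb/lvm-scripts -jar graphdb.jar
```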


    GraphDB checks whether the folder contains the required scripts. The check is performed the first time you try to get the repository storage folder contents, for example, when you need to do a backup or perform a full replication.

GraphDB executes the script with a single parameter: the pathname of the storage folder from which you want to transfer the data (either to perform a backup or to replicate it to another node). While invoking it, GraphDB captures the script’s standard output and error streams in order to obtain the logical volume, the volume group, and the storage location relative to the volume’s mount point.

GraphDB also checks the exit code of the script (it MUST be 0) and extracts the locations by parsing the script output, which must contain the logical volume (after lv=), the volume group (after vg=), and the path (after local=) relative to the mount point of the folder supplied as the script argument.
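As a rough sketch of this contract, the following parses a sample output in the lv=/vg=/local= format. The volume, group, and path names are made up for illustration, not taken from this document:

```shell
# Sample output such a script might print (names are hypothetical)
out='lv=repo-lv
vg=data-vg
local=repositories/myrepo/storage'

# Extract each value by its key prefix, as described above
lv=$(printf '%s\n' "$out" | sed -n 's/^lv=//p')
vg=$(printf '%s\n' "$out" | sed -n 's/^vg=//p')
local_path=$(printf '%s\n' "$out" | sed -n 's/^local=//p')

echo "$lv $vg $local_path"
```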

If the storage folder is not located on an LVM2-managed volume, the script fails with a different exit code (it relies on the exit code of the lvs command), and the whole operation falls back to the ‘classical’ way of doing it (the same as in previous versions).
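The fallback decision can be sketched as follows. Here check_lvm is a hypothetical stand-in for the detection script; the stub simulates the non-LVM case so the example is self-contained:

```shell
# Stub standing in for the detection script; a real one would run
# something like `lvs <device-backing-the-folder>` and propagate
# its exit code.
check_lvm() {
  return 1   # simulate: folder is not on an LVM2-managed volume
}

if check_lvm /data/repositories/myrepo; then
  mode=lvm-snapshot
else
  mode=classic   # fall back to the pre-LVM behavior
fi
echo "$mode"
```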

If the volume group and the logical volume are found, the script is executed, creating a snapshot named after the value of the $BACKUP variable (see the script, which also defines where the snapshot will be mounted). When the script is executed, the logical volume and volume group are passed as environment variables named LV and VG, preset by GraphDB.
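A dry-run sketch of what the snapshot step might look like. The commands are only printed, not executed, and the names, snapshot size, and mount point are illustrative assumptions, not values from this document:

```shell
# Environment GraphDB is described to preset; values are hypothetical
LV=repo-lv
VG=data-vg
BACKUP=backup   # snapshot name taken from the $BACKUP variable

# Build the privileged commands the script might run via `sudo -A`,
# and print them instead of executing them (dry run)
cmd_create="lvcreate --snapshot --name $BACKUP --size 1G /dev/$VG/$LV"
cmd_mount="mount /dev/$VG/$BACKUP /mnt/$BACKUP"
echo "$cmd_create"
echo "$cmd_mount"
```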

If the script finishes without errors (exit code = 0), the node is immediately initialized so that it is available for further operations (reads and writes).

The actual maintenance operation now uses the data from the ‘backup’ volume instead of the original mount point.

When the data transfer completes (whether successfully, with an error, or canceled), GraphDB invokes the script, which unmounts the backup volume and removes it. This way, the data changes are merged back into the original volume.
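The cleanup step can be sketched as a similar dry run. Again the names are hypothetical, and the commands are only printed; removing the snapshot leaves the origin volume holding all changes made during the maintenance window:

```shell
# Hypothetical names matching the snapshot sketch above
VG=data-vg
BACKUP=backup

# Dry run of the cleanup: unmount the snapshot, then remove it
cmd_umount="umount /mnt/$BACKUP"
cmd_remove="lvremove -f /dev/$VG/$BACKUP"
echo "$cmd_umount"
echo "$cmd_remove"
```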

Some further notes

The scripts rely on root access to do ‘mount’ and to create and remove snapshot volumes. The SUDO_ASKPASS variable is set to point to the script from the same folder. All commands that need privileges are executed using sudo -A, which invokes the command pointed to by the SUDO_ASKPASS variable; the latter simply prints the required password to its standard output. You have to alter that script accordingly.
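A minimal sketch of an askpass helper of the kind described above. The path and password are placeholders you would replace with your own:

```shell
# Create a throwaway askpass helper (placeholder path and password)
cat > /tmp/askpass-demo.sh <<'EOF'
#!/bin/sh
# sudo -A calls this and reads the password from its stdout
echo 'your-password-here'
EOF
chmod +x /tmp/askpass-demo.sh

# Point sudo at it; `sudo -A <cmd>` would now obtain the password this way
export SUDO_ASKPASS=/tmp/askpass-demo.sh
/tmp/askpass-demo.sh
```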

During the LVM-based maintenance session, GraphDB creates two additional zero-size files in the scripts folder: snapshot-lock, indicating that a session has started, and snapshot-created, indicating successful completion of the script. They prevent other threads or processes from interfering with a maintenance operation that has been initiated and is still in progress.
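The marker-file convention can be illustrated with a small sketch; the folder here is a temporary stand-in for the real scripts folder:

```shell
# Stand-in scripts folder for illustration
SCRIPTS=$(mktemp -d)

# GraphDB is described to create these zero-size markers during a session
touch "$SCRIPTS/snapshot-lock"      # session started
touch "$SCRIPTS/snapshot-created"   # snapshot script finished successfully

# Another process can respect the lock before starting its own maintenance
if [ -e "$SCRIPTS/snapshot-lock" ]; then
  echo "maintenance in progress, skipping"
fi
```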