Backing Up and Restoring Data

MemSQL provides BACKUP and RESTORE commands for creating binary database backups and restoring from them. BACKUP writes a consistent snapshot of the database to disk across the cluster. RESTORE restores the database from this snapshot across all nodes in the cluster. MemSQL's distributed RESTORE even handles replication, automatically creating master and slave partitions on paired leaf nodes. Unlike mysqldump, the backup files are created on the leaves, not on the client.

When a BACKUP operation is executed, each leaf node in the cluster creates a .backup file for each of its partitions. These files are written by each leaf into the destination directory (relative to memsqlbin/data) provided to BACKUP. MemSQL does not move the .backup files off the leaf nodes; it's up to you to move them to a safe location after the backup completes, and to return them to their original location before running RESTORE.

When the backup completes or fails, a corresponding row is inserted into the Backup History Table. That row is also written as JSON to a sentinel file (named BACKUP_COMPLETE or BACKUP_INCOMPLETE, respectively) and returned as the result set of the BACKUP statement. The sentinel file is written to the relative /data/<backup_folder> on the master aggregator node.
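As a sketch of what the sentinel row looks like, the following parses an illustrative JSON document whose field names follow the Backup History Table columns described below; the values are made up for the example, not taken from a real backup:

```python
import json

# Illustrative sketch: the sentinel file holds the backup-history row as JSON.
# Field names follow the Backup History Table columns; the values here are
# invented for the example.
sentinel = (
    '{"Checksum": "91B3D335", "Database_Name": "users",'
    ' "Num_Partitions": "6", "Status": "Success"}'
)
row = json.loads(sentinel)

# A BACKUP_COMPLETE sentinel corresponds to a successful backup row.
print(row["Database_Name"], row["Num_Partitions"], row["Status"])
```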

Backup History Table

MemSQL keeps track of metadata for all backups in the mv_backup_history table in information_schema. This table provides important metadata on recent successful backups.

The mv_backup_history table can be cleared by running CLEAR BACKUP_HISTORY.

Table Columns

| Column          | Description                                                                    |
| backup_id       | A unique identifier for the backup.                                            |
| cluster_name    | Name of the cluster that carried out the backup.                               |
| cluster_id      | Unique ID that identifies the cluster (see show cluster id).                   |
| database_name   | Name of the database that was backed up.                                       |
| start_timestamp | Timestamp at which the backup started.                                         |
| end_timestamp   | Timestamp at which the backup completed.                                       |
| num_partitions  | Number of partitions in the backup.                                            |
| backup_path     | Path where the backup files were placed when the backup completed.             |
| checksum        | Concatenated CRC32C checksum of all backup files (see Verifying Backup Files). |
| status          | Success or Failure.                                                            |
| size            | Total size of all backup files written, in megabytes.                          |


BACKUP DATABASE users TO 'users_backup_5';
| Backup_Id | Cluster_Name        | Cluster_Id           | Database_Name | Start_Timestamp     | End_Timestamp       | Num_Partitions | Backup_Path                 | Checksum                                                 | Status  | Size |
|         8 | Memsql_Test_cluster | 16559108944112188990 | users         | 2018-10-17 17:09:46 | 2018-10-17 17:09:46 |              6 | users_backup_5/users.backup | 91b3d3353709baa1eff3ba5cfba6bac939b318f41652eac49ad8a644 | Success | 5612 |
1 row in set (0.26 sec)
SELECT * FROM information_schema.MV_BACKUP_HISTORY ORDER BY Backup_Id;
| BACKUP_ID | CLUSTER_NAME        | CLUSTER_ID           | DATABASE_NAME | START_TIMESTAMP     | END_TIMESTAMP       | NUM_PARTITIONS | BACKUP_PATH                 | CHECKSUM                                                 | STATUS  | SIZE |
|         1 | Memsql_Test_cluster | 16559108944112188990 | users         | 2018-10-17 17:04:52 | 2018-10-17 17:04:52 |              6 | user_backup_0/users.backup  | 13343332b58e5aa6187e4fb079215ace1e3da1bb31dc538b185f4643 | Success | 5612 |
|         2 | Memsql_Test_cluster | 16559108944112188990 | users         | 2018-10-17 17:04:52 | 2018-10-17 17:04:52 |              6 | user_backup_1/users.backup  | 3a7665606b41f91863b8b54650630c9c377ff7e9189e05d9311d1011 | Success | 5612 |
|         3 | Memsql_Test_cluster | 16559108944112188990 | users         | 2018-10-17 17:04:52 | 2018-10-17 17:04:52 |              6 | user_backup_2/users.backup  | e4b9c6de4203af4a4afae3148eacaf224cb90d1f6358ff2fefd2b3af | Success | 5612 |
|         4 | Memsql_Test_cluster | 16559108944112188990 | users         | 2018-10-17 17:04:52 | 2018-10-17 17:04:52 |              6 | user_backup_3/users.backup  | cdfb908c39c555bc943540aaa7eef97065fb5b4d4a1aa97dc690e5fd | Success | 5612 |
|         5 | Memsql_Test_cluster | 16559108944112188990 | users         | 2018-10-17 17:04:52 | 2018-10-17 17:04:53 |              6 | user_backup_4/users.backup  | b63d6a7a108703eebd7716f8dc2803866b37b45044d64660bd561f0b | Success | 5612 |
|         6 | Memsql_Test_cluster | 16559108944112188990 | users         | 2018-10-17 17:09:18 | 2018-10-17 17:09:19 |              6 | ./users.backup              | 9f7f3c281e4becf3c6b1ec0ef56a55d44275e2026d94103294144959 | Success | 5612 |
|         7 | Memsql_Test_cluster | 16559108944112188990 | users         | 2018-10-17 17:09:24 | 2018-10-17 17:09:24 |             -1 | ./users.backup              |                                                          | Failure |    0 |
|         8 | Memsql_Test_cluster | 16559108944112188990 | users         | 2018-10-17 17:09:46 | 2018-10-17 17:09:46 |              6 | users_backup_5/users.backup | 91b3d3353709baa1eff3ba5cfba6bac939b318f41652eac49ad8a644 | Success | 5612 |
8 rows in set (0.01 sec)
CLEAR BACKUP_HISTORY;
SELECT * FROM information_schema.mv_backup_history;
Empty set (0.01 sec)

Backup and Restore on NFS (Network File System)

MemSQL makes BACKUP and RESTORE easy to use with a Network File System (NFS) by naming the .backup files so that no two leaves will attempt to write to the same file, even when they are all writing to the same NFS directory.

To back up MemSQL with NFS:

  • Ensure that the same NFS share is mounted at the same directory on all leaves (e.g., /mnt/backup_nfs/).
  • When running BACKUP, select a backup_path that points to a directory on that NFS share (e.g., BACKUP DATABASE memsql_demo TO '/mnt/backup_nfs/backup_2016_05_04/').

Backup and Restore without NFS

If your cluster is not on NFS and you are restoring a cluster with a different configuration or on different hardware than the original, you must manually distribute partitions across the cluster before running RESTORE. If the cluster has redundancy level greater than one, you must group leaf nodes into replication pairs. Paired nodes must have the same set of partition backups in their local backup directories (specified by backup_path). MemSQL will automatically choose master and slave partitions when you run RESTORE on the master aggregator.

Backup and Restore on S3 and Azure

BACKUP and RESTORE can also specify an S3 bucket or Azure container. In this case, all the backup files across all the leaves (one per partition, as described above) are put directly into the S3 bucket or Azure container. When RESTORE is called, MemSQL distributes the backup files appropriately and restores the data, similar to how restoring from an NFS drive works.

Verifying Backup Files

With each backup, there is a checksum associated with the files that comprise the backup. Each checksum is a long string of hex characters, with each eight characters representing a subsection of the backup. To create this checksum, MemSQL utilizes CRC32C, a common variant of CRC32, to process the backup files.

Our implementation of CRC32C has been verified with the commonly used package crcmod, but any library that implements or uses CRC32C can be utilized.

To illustrate how a checksum is composed of the discrete parts of a database backup, consider the following example, where all backup files for the cluster are stored in an NFS share named backup_database:

| backup_database
  | db.backup
  | db_0.backup
  | db_0.backup_columns0.tar
  | db_0.backup_columns1.tar
  | db_1.backup
  | db_2.backup
  | db_2.backup_columns0.tar 
  | db_2.backup_columns1.tar
  | db_2.backup_columns2.tar
  | db_3.backup
  | db_3.backup_columns0.tar

This backup is for a small, mixed columnstore/rowstore database with four partitions, taken to an NFS drive. The checksum for this backup is 91b3d3353709baa1eff3ba5cfba6bac939b318f41652eac4 and is found in the BACKUP_COMPLETE sentinel file created on the master aggregator node in its relative /data/db_backup/ MemSQL folder.

The first 8 characters are the CRC32C of the reference database, db.backup, on the master aggregator.

Next, each partition, in order, contributes either 8 or 16 characters, depending on whether it has columnstore segment tar files. In the 16-character case, the first 8 characters are the CRC32C of the partition's snapshot (e.g., db_0.backup), while the second 8 characters cover its segment tar files.
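As a sketch, a concatenated checksum can be split back into its 8-character segments, the first for the reference database and the rest for the partitions; the hex string below is illustrative, not from a real backup:

```python
# Illustrative checksum: 8 characters for the reference database snapshot,
# followed by 8-character groups for each partition's snapshot and, when
# present, its columnstore tar files. The value here is invented.
checksum = "13343332B58E5AA6187E4FB0"
segments = [checksum[i:i + 8] for i in range(0, len(checksum), 8)]
print(segments)
```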

If segment tar files have been created, they are checksummed as a single stream. This is equivalent to calculating the CRC32C of db_0.backup_columns0.tar and db_0.backup_columns1.tar concatenated together, or to calculating the CRC32C of the first tar file and then continuing with the second, and so on, without finalizing the checksum in between.
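This "concatenate then checksum" equivalence can be demonstrated with the stdlib's plain CRC-32, used here only as a stand-in for CRC32C (zlib does not implement the Castagnoli polynomial); the file contents are made up for the example:

```python
import zlib

# Stand-in data for two columnstore segment tar files.
tar0 = b"contents of db_0.backup_columns0.tar"
tar1 = b"contents of db_0.backup_columns1.tar"

# Checksum of the concatenation...
crc_concat = zlib.crc32(tar0 + tar1) & 0xFFFFFFFF

# ...equals a running checksum fed one file at a time, with no
# finalization between files.
crc_stream = zlib.crc32(tar1, zlib.crc32(tar0)) & 0xFFFFFFFF

assert crc_concat == crc_stream
```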

Note: Even if a database has a columnstore table, a given partition may not have a corresponding tar file, because data for that columnstore table may be cached in its hidden rowstore table. In that case, the data is contained inside the rowstore snapshot.

The following Python script can be used to verify a backup checksum. Copy and paste the script into a new file; usage instructions are covered in the code comments. After running the script, run echo $?. A result of 0 means the verification was successful, while a nonzero result means it was not. The script is well commented and prints debugging output if there was an error in the validation process.


Script completion time is dependent on the size of the backup; verification may take hours to complete.

# Given a directory, this script will verify that the backup CRC and size
# both are unchanged.
# REQUIRES: crcmod to be installed (pip install crcmod).
# USAGE: python <script_file> /absolute/path/to/backup
# NOTE: This script needs read privileges on all files being verified.
import crcmod
import json
import sys
import errno

# verifyBackup:
#   Verifies that the CRC located in the backup sentinel file
#   (BACKUP_COMPLETE) matches the calculated CRC of the files in
#   backupDirectory.
# Param backupDirectory: absolute path to the directory where the backup exists.
# Return: 0 on success, -1 on failure.
def verifyBackup(backupDirectory):
    # Strip off the trailing '/' if it exists.
    if backupDirectory[-1] == '/':
        backupDirectory = backupDirectory[:-1]

    with open("%s/BACKUP_COMPLETE" % backupDirectory, "r") as f:
        backupDictionary = json.loads(f.read())
        try:
            finalCrc = backupDictionary["Checksum"]
            dbName = backupDictionary["Database_Name"]
            numPartitions = int(backupDictionary["Num_Partitions"])
        except KeyError as e:
            print(e)
            print("Sentinel file 'BACKUP_COMPLETE' is from an unsupported "
                  "version of backup.")
            return -1

    # This polynomial is in the CRC32C specification; crcmod also has CRC32C
    # hardcoded, so either can be used.
    crc = crcmod.Crc(0x11EDC6F41, rev=True, initCrc=0x00000000,
                     xorOut=0xFFFFFFFF)
    crclist = ""

    # Process the reference snapshot.
    with open("%s/%s.backup" % (backupDirectory, dbName), "rb") as f:
        crc.update(f.read())
    crclist += crc.hexdigest()

    # Process each partition.
    for i in range(numPartitions):
        crc =

        # Process the partition snapshot.
        # Each partition MUST have a snapshot.
        with open("%s/%s_%d.backup" % (backupDirectory, dbName, i), "rb") as f:
            crc.update(f.read())
        crclist += crc.hexdigest()
        # Snapshots and columns are checksummed with separate CRCs.
        crc =

        # To emulate a do-while loop in Python.
        tarFound = True

        # Only append this CRC to the list if columnar blobs were found.
        columnCrc = False
        j = 0

        # Process all tarballed columnstore files.
        # NOTE: Even if a database has a columnstore, this does not imply
        # each partition has columnar blobs. The data might exist in the
        # rowstore snapshot or might be skewed such that all rows exist
        # in other partitions.
        while tarFound:
            try:
                with open("%s/%s_%d.backup_columns%d.tar"
                          % (backupDirectory, dbName, i, j), "rb") as f:
                    crc.update(f.read())
                    j += 1
                    columnCrc = True
            except IOError as e:
                if e.errno == errno.ENOENT:
                    tarFound = False
                else:
                    raise
        if columnCrc:
            crclist += crc.hexdigest()

    # The CRCs may differ in case, so compare both in uppercase.
    if crclist != finalCrc.upper():
        print("CRC calculated from directory: " + crclist)
        print("CRC in backup file: " + finalCrc)
        print("CRCs do not match!")
        return -1
    return 0

if __name__ == '__main__':
    if len(sys.argv) != 2:
        print("Incorrect usage: please include just the directory where "
              "the backup is located.")
        sys.exit(-1)
    sys.exit(verifyBackup(sys.argv[1]))
