
BACKUP DATABASE

Info

In MemSQL Helios, backing up or restoring a database to/from a local filesystem is not supported. Also, in MemSQL Helios, making or restoring an incremental backup is not supported.

Backs up a database. The backup files are saved on the master aggregator and on the leaves.

Syntax

Backup to a Local or Network Drive

BACKUP [DATABASE] db_name TO "backup_path" [WITH { INIT | DIFFERENTIAL }]

Backup to S3

BACKUP [DATABASE] db_name TO S3 "bucket/path" [CONFIG configuration_json] CREDENTIALS credentials_json

configuration_json:
'{"region":"your_region",
  "endpoint_url":"http://other_endpoint"}'

credentials_json:
'{"aws_access_key_id": "replace_with_your_access_key_id",
  "aws_secret_access_key": "replace_with_your_secret_access_key",
  ["aws_session_token": "replace_with_your_temp_session_token",]
  ["role_arn":"replace_with_your_role_arn"]
}'

Backup to an S3 Compatible Storage Provider

BACKUP [DATABASE] db_name TO S3 "bucket/path" [CONFIG configuration_json] CREDENTIALS credentials_json

configuration_json:
'{"endpoint_url":"http://storage_provider_endpoint"}'

credentials_json:
'{"aws_access_key_id": "replace_with_your_access_key_id",
  "aws_secret_access_key": "replace_with_your_secret_access_key"
}'

Backup to Google Cloud Storage using the S3 Interface

BACKUP [DATABASE] db_name TO S3 "bucket/path" [CONFIG configuration_json] CREDENTIALS credentials_json

configuration_json:
'{"compatibility_mode": true,
  "endpoint_url": "https://storage.googleapis.com"}'

credentials_json:
'{"aws_access_key_id": "replace_with_your_google_access_key",
  "aws_secret_access_key": "replace_with_your_google_secret_key"
}'

See the remarks for details on the aws_access_key_id and aws_secret_access_key.

Backup to Azure Blobs

BACKUP DATABASE db_name TO AZURE "container/blob-prefix" CREDENTIALS credentials_json

credentials_json:
'{"account_name": "your_account_name_here",
  "account_key": "your_account_key_here"
}'

Remarks

  • db_name is the name of a MemSQL database.
  • The BACKUP DATABASE command writes a consistent snapshot of the database to disk. This means a backup stores the data in the database as it existed at the time the BACKUP DATABASE operation started (all committed transactions at the time the BACKUP DATABASE started will be stored in the backup). Backup files containing the database’s reference tables are saved on the master aggregator. Backup files are also saved on the leaves.
Info

Backing up a database does not back up the users and grants for objects in the database; you must back these up separately. In addition, backing up a database does not back up engine variables, so you should also save the memsql.cnf file and the output of the SHOW _SYNC VARIABLES LIKE '%' command from each node.

  • A backup of system databases cannot be created.
  • BACKUP DATABASE is an online operation, which means that writes will continue to run while the BACKUP DATABASE is in progress. Any writes executed after the BACKUP DATABASE has started will not be reflected in the stored backup.
  • All partitions of the database need to be in the online or replicating state to run a BACKUP DATABASE operation. The backup file is created on the server and not the client, unlike mysqldump.
  • This command must be run on the master aggregator node (see Node Requirements for MemSQL Commands).
  • The aggregator needs to briefly block new queries from running across the cluster to start a BACKUP DATABASE. Once the BACKUP DATABASE is running, queries can run against the cluster normally. If there is a long-running write query executing at the time a BACKUP DATABASE operation is run, the BACKUP DATABASE will wait for the query to finish.
  • MemSQL does not support BACKUP DATABASE on remote databases.
  • Use the [WITH { INIT | DIFFERENTIAL }] clause to make an incremental backup. If you don’t specify this clause, a full backup is made. Incremental backups cannot be made to S3 or Azure.
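For example, an incremental backup chain starts with a WITH INIT backup, and later WITH DIFFERENTIAL backups capture only the changes made since the previous backup. A minimal sketch (the database name and directory are placeholders):

```sql
-- Start an incremental backup chain with an initial full backup.
BACKUP DATABASE memsql_demo TO "/incr-backup-dir" WITH INIT;

-- Later, back up only the changes made since the previous backup.
BACKUP DATABASE memsql_demo TO "/incr-backup-dir" WITH DIFFERENTIAL;
```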

Backup to a Local or Network Drive

  • The backup_path is relative to the directory stored in the datadir engine variable, by default. You can include / at the front of the backup_path to override the default and use an absolute path.

Backup to S3

  • BACKUP ... TO S3 writes the backup files to the specified S3 bucket.

  • MemSQL appends db_name.backup to the path and writes the backup objects under path/db_name.backup/.

  • bucket/path/db_name.backup must not currently exist; otherwise, an error will be returned.

  • aws_session_token is optional. This is only needed if your AWS account uses the AWS Security Token Service.

  • region is optional. It is the AWS region where the S3 bucket resides.

  • endpoint_url is optional. It overrides the default S3 endpoint; specify it when connecting to an S3 compatible service.

  • role_arn is optional and can be used instead of specifying aws_access_key_id and aws_secret_access_key values. It is only needed if your AWS security is set up to require a role.

  • Backing up to an S3 bucket requires the following permissions:

    s3:GetObject
    s3:PutObject
    s3:ListBucket
    
  • Do not specify the compatibility_mode setting here; it applies only when backing up to Google Cloud Storage using the S3 interface.
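As an illustration, an IAM policy granting these permissions might look like the following. The bucket name is a placeholder, and your security requirements may call for a narrower resource scope:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::your-backup-bucket",
        "arn:aws:s3:::your-backup-bucket/*"
      ]
    }
  ]
}
```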

Backup to an S3 Compatible Storage Provider

  • BACKUP ... TO S3 with the endpoint_url set to the URL of a storage provider creates an S3 compatible backup at the storage provider.
  • MemSQL appends db_name.backup to the path and writes the backup objects under path/db_name.backup/.
  • bucket/path/db_name.backup must not currently exist; otherwise, an error will be returned.
  • aws_access_key_id is optional. It is the access key ID for accessing the storage provider.
  • aws_secret_access_key is optional. It is the secret access key for accessing the storage provider.
  • Do not specify the compatibility_mode setting here; it applies only when backing up to Google Cloud Storage using the S3 interface.
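For example, a backup to a self-hosted S3 compatible store might look like the following. The endpoint URL, bucket path, and key values are placeholders:

```sql
BACKUP DATABASE memsql_demo TO S3 "backup_bucket/backups/6_1_2018"
  CONFIG '{"endpoint_url":"http://storage_provider_endpoint"}'
  CREDENTIALS '{"aws_access_key_id":"your_access_key_id","aws_secret_access_key":"your_secret_access_key"}';
```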

Backup to Google Cloud Storage using the S3 Interface

  • BACKUP ... TO S3 with the endpoint_url set to https://storage.googleapis.com and compatibility_mode set to true creates a Google Cloud Storage backup that is closely compatible with S3. Compatibility mode disables certain optimizations that work only with native S3.
  • MemSQL appends db_name.backup to the path and writes the backup objects under path/db_name.backup/.
  • bucket/path/db_name.backup must not currently exist; otherwise, an error will be returned.
  • For aws_access_key_id, provide your Google access key, which is a 24-character alphanumeric string. For aws_secret_access_key, provide your Google secret key, which is a 40-character Base64-encoded string linked to a specific Google access key. For more information on these keys, see the Google Cloud Storage documentation on migrating keys.

Backup to Azure Blobs

  • BACKUP ... TO AZURE writes the backup files to the specified Azure container.
  • MemSQL appends db_name.backup to the blob prefix and writes the backup objects under blob-prefix/db_name.backup/.
  • container/blob-prefix/db_name.backup must not currently exist; otherwise, an error will be returned.

Examples

Making an Incremental Backup

The Backing Up and Restoring Data topic provides an example of making incremental backups.

All of the examples shown below make full backups.

Local Backup

The following example backs up a database to /my-backup-dir.

BACKUP DATABASE memsql_demo TO "/my-backup-dir";

The following example backs up a database to the directory stored in the datadir engine variable.

BACKUP DATABASE memsql_demo TO "./";

The following example backs up a database to the my-relative-backup-dir directory, which is relative to the directory stored in the datadir engine variable. In this example, if datadir contained the directory /var/lib/memsql/leaf-3308-aaaaaaaaaa/data, the backup would be saved to the /var/lib/memsql/leaf-3308-aaaaaaaaaa/data/my-relative-backup-dir directory.

BACKUP DATABASE memsql_demo TO "./my-relative-backup-dir";

S3 Backup

The following example backs up a database to an S3 bucket.

BACKUP DATABASE memsql_demo TO S3 "backup_bucket/backups/6_1_2018"
  CONFIG '{"region":"us-east-1"}'
  CREDENTIALS '{"aws_access_key_id":"your_access_key_id","aws_secret_access_key":"your_secret_access_key"}';
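If you assemble this statement from application code, serializing the CONFIG and CREDENTIALS objects with a JSON library avoids quoting and escaping mistakes. A hypothetical sketch in Python (the database name, bucket path, region, and key values are placeholders):

```python
import json

# Placeholder values -- substitute your own bucket path, region, and keys.
config = {"region": "us-east-1"}
credentials = {
    "aws_access_key_id": "your_access_key_id",
    "aws_secret_access_key": "your_secret_access_key",
}

# Build the BACKUP statement, letting json.dumps handle the JSON quoting.
statement = (
    'BACKUP DATABASE memsql_demo TO S3 "backup_bucket/backups/6_1_2018" '
    f"CONFIG '{json.dumps(config)}' "
    f"CREDENTIALS '{json.dumps(credentials)}'"
)
print(statement)
```

The resulting string can then be sent to the master aggregator through any MySQL-compatible client library.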

The following example backs up a database to an S3 bucket using an Amazon Resource Name (ARN) for AWS Identity and Access Management (IAM).

BACKUP DATABASE mydatabase TO S3 'my-bucket/memsql-backup/'
  CONFIG '{"region":"us-east-1"}'
  CREDENTIALS '{"role_arn": "arn:aws:iam::<AccountID>:role/EC2AmazonS3FullAccess"}';

Backup to Google Cloud Storage using the S3 Interface

The following example uses the S3 interface to back up a database to a Google Cloud Storage bucket.

BACKUP DATABASE memsql_demo TO S3 "backup_bucket/backups/6_1_2018"
  CONFIG '{"compatibility_mode":true,"endpoint_url":"https://storage.googleapis.com"}'
  CREDENTIALS '{"aws_access_key_id":"your_google_access_key","aws_secret_access_key":"your_google_secret_key"}';

Azure Blobs Backup

The following example backs up a database to an Azure container.

BACKUP DATABASE memsql_demo TO AZURE "backup_container/backups/6_1_2018"
  CREDENTIALS '{"account_name":"your_account_name_here","account_key":"your_account_key_here"}';

Related Topics