MySQL Backup: Table By Table Backup With Auto Rotation, For Easy Restoration Of Partial/Full Database

Here is a MySQL backup script that takes table-by-table backups (an individual backup file for each table of each database) in a compressed format. It also rotates old backup files automatically. The script handles InnoDB and MyISAM tables separately.

You have to set the following variables before running the backup script.

DB_USER - The database user who has access to all databases and their tables. I used "root" for my deployment.

DB_PASS - The password of the above user, prefixed with "-p". For example, if the password is Secret, you should write it as "-pSecret".

LOG_FILE - The file to which the backup log will be written. It must be writable by the user running the script.

BASE_BAK_FLDR - The backup folder. It must be writable by the user running the script.

RM_FLDR_DAYS - The backup rotation period. "+30" means 30 days.

The Backup Script

#!/bin/bash
# Database Backup script.
# Created By:   Mohammed Salih
#               Senior System Administrator
#               Date: 21/06/2007
#
# Database credentials
DB_USER=root
# Please put the password in the xxxxx section below; note that there is
# no space between -p and xxxxx
DB_PASS="-pxxxxx"
# Get the list of databases
DBS_LIST=$(echo "show databases;" | mysql -u $DB_USER $DB_PASS -N)
# Log file (example path; must be writable by the user running the script)
LOG_FILE=/backup/mysql_dump.log
# Backup base directory (must be writable by the user running the script)
BASE_BAK_FLDR=/backup
# Backup rotation period; +30 means 30 days.
RM_FLDR_DAYS="+30"

# From here, only edit if you know what you are doing.

# Only the 'mysql' user is allowed to run this script.
if [ ! "$(id -u -n)" = "mysql" ]; then
        echo -e "Error:: $0 : Only user 'mysql' can run this script"
        exit 100
fi

# Check if we can connect to the mysql server; otherwise die.
PING=$(mysqladmin ping -u $DB_USER $DB_PASS 2>/dev/null)
if [ "$PING" != "mysqld is alive" ]; then
        echo "Error:: Unable to connect to MySQL Server, exiting !!"
        exit 101
fi

# From here on, append all output to the log file.
exec >> $LOG_FILE 2>&1

# Backup process starts here.
# Flush logs prior to the backup.
mysql -u $DB_USER $DB_PASS -e "FLUSH LOGS"

# Loop through the DB list and create table-level backups,
# applying the appropriate option for MyISAM and InnoDB tables.
for DB in $DBS_LIST; do
    DB_BKP_FLDR=$BASE_BAK_FLDR/$(date +%d-%m-%Y)/$DB
    [ ! -d $DB_BKP_FLDR ] && mkdir -p $DB_BKP_FLDR
    # Get the schema of the database with the stored procedures.
    # This will be the first file in the database backup folder.
    mysqldump -u $DB_USER $DB_PASS -R -d --single-transaction $DB | \
            gzip -c > $DB_BKP_FLDR/000-DB_SCHEMA.sql.gz
    # Get the tables and their types; store them in an array.
    table_types=($(mysql -u $DB_USER $DB_PASS -e "show table status from $DB" | \
            awk '{ if ($2 == "MyISAM" || $2 == "InnoDB") print $1,$2 }'))
    table_type_count=${#table_types[@]}
    index=0
    # Loop through the tables and apply the mysqldump option according to
    # the table type. The table-specific SQL files do not contain any CREATE
    # info for the table schema; that is available in the SCHEMA file.
    while [ "$index" -lt "$table_type_count" ]; do
        START=$(date +%s)
        table=${table_types[$index]}
        TYPE=${table_types[$index + 1]}
        echo -en "$(date) : backup $DB : $table : $TYPE "
        if [ "$TYPE" = "MyISAM" ]; then
            DUMP_OPT="-u $DB_USER $DB_PASS $DB --no-create-info --tables"
        else
            DUMP_OPT="-u $DB_USER $DB_PASS $DB --no-create-info --single-transaction --tables"
        fi
        mysqldump $DUMP_OPT $table | gzip -c > $DB_BKP_FLDR/$table.sql.gz
        index=$(($index + 2))
        echo -e " - Total time : $(($(date +%s) - $START))\n"
    done
done

# Rotate old backups according to 'RM_FLDR_DAYS'.
if [ ! -z "$RM_FLDR_DAYS" ]; then
    echo -en "$(date) : removing folder : "
    find $BASE_BAK_FLDR/ -maxdepth 1 -mtime $RM_FLDR_DAYS -type d -exec rm -rf {} \;
fi
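One subtlety worth calling out: the output of "show table status" is flattened into a single bash array of alternating name/type pairs, which the inner loop walks two entries at a time. A standalone sketch of that pattern, using made-up table names:

```shell
# Sketch of how the inner loop consumes the flat name/type array.
# The table names and engines below are made up for illustration.
table_types=(orders InnoDB sessions MyISAM)
index=0
while [ "$index" -lt "${#table_types[@]}" ]; do
    table=${table_types[$index]}        # even slots hold the table name
    TYPE=${table_types[$index + 1]}     # odd slots hold the storage engine
    echo "$table -> $TYPE"
    index=$((index + 2))
done
```

Running it prints "orders -> InnoDB" and "sessions -> MyISAM", which is why the real script can choose dump options per table.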


The Backup Location

For example, if you have taken the backup of "bigdb" on 1st Jan 2007, then the backup will be kept in /backup/01-01-2007/bigdb (that is, $BASE_BAK_FLDR/<DD-MM-YYYY>/<database>).



The Restore Script

The following command is an example of restoring a database called "bigdb" whose backup was taken on 1st Jan 2007.

cd /backup/01-01-2007/bigdb

for table in *; do gunzip -c $table | mysql -u root -pSecret bigdb_new; done

The above command will iterate through the list of files in the directory and restore all the tables to bigdb_new database. It is assumed that you have created the bigdb_new database prior to running the script. 
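Because each table lives in its own file, you can also restore a single table rather than the whole set. A minimal sketch, assuming a hypothetical table named "customers" (the mysql invocation is left commented out since it needs a live server and the right credentials):

```shell
# Pick one table's dump file; "customers" is a hypothetical table name.
BKP=/backup/01-01-2007/bigdb/customers.sql.gz
# Strip the directory and the .sql.gz suffix to recover the table name.
TABLE=$(basename "$BKP" .sql.gz)
echo "restoring table $TABLE into bigdb_new"
# gunzip -c "$BKP" | mysql -u root -pSecret bigdb_new
```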

Comments


By: James Day

If you want the data in a file for each table, consider using the --tab option instead. Then you can use FLUSH TABLES WITH READ LOCK to get consistent MyISAM and InnoDB dumps, or --single-transaction if only InnoDB tables are used. This is described in the manual.

For faster binary backups you could also take a look at MySQL Enterprise Backup, available for evaluation via E-Delivery.

The script is insecure, placing the password on the command line where it will be visible with top and other process tools. One of the more secure ways to do this is to use EXPECT in bash, as illustrated by durden tyler with this code:

echo "
spawn /bin/bash
send \"mysqldump -u root -p database_1 my_table\r\"
expect \"password:\"
send \"$PASS\r\"
expect \"$ \"
" | expect >backup.sql

James Day, MySQL Principal Support Engineer, Oracle UK

By: Bill Karwin

This script takes backups one table at a time, which carries significant risk that you will get an inconsistent backup.

For example, if table LineItems references table Orders, and your backup script is running concurrently with a user who is canceling an order, your backup will include the LineItems rows for the canceled order, but some seconds later when the script reaches the parent table Orders, the parent row for the deleted order is gone.  In fact, the dependent LineItems rows are gone by that time too, but your script backed up that table before the rows were deleted.  Thus your backup contains orphan rows that were never orphaned in the live database.

Also note that the --single-transaction option is important to preserve consistency when mysqldump outputs multiple tables.  If you back up only one table per invocation of mysqldump, using --single-transaction is superfluous.

I assume you wrote this script to run table-by-table backups because backing up in one command locks the databases for too long, and you need to allow concurrent users of the database.  But by backing up table-by-table, you must lock out concurrent changes to the database while you're running the script, or else get inconsistent backups.  In that case, there's no difference between locking the database for a long-running mysqldump command, or locking the database for the duration of the series of shorter-running mysqldump commands.

Your script does create one output file per table, which mysqldump does not do by default.  If you need the ability to restore one table at a time, I suggest you could run one mysqldump command, and pipe its output to a Perl script that matches CREATE TABLE patterns, closes the current output file, and opens a new output file.

I also suggest you take a look at Percona XtraBackup, which performs a consistent backup concurrently with active use of the database, runs much faster than mysqldump, and supports incremental backups and streaming. Percona XtraBackup is a free tool from my employer, Percona Inc.

By: Andrew

This does not provide consistent backups and is pretty useless for any backup that has even remote data integrity requirements.  Only use this if you don't care about your data.

By: Shlomi Noach

What is the purpose for flushing the logs?

The backup script has no notion of atomicity; tables are exported one after the other, which is only good for anyone not caring about consistency & integrity of the data. Unless, of course, no one writes to the database.

What is the purpose of the script in the first place? What is the reason I would want to use it?

Forgive me if I'm too critical.


There is another script which I often use as well. It has some more features but cannot back up per table. So all in all it depends on what you need.

By: Php Website Developer

Great! This is a very informative tutorial about how to create a database backup. It just made my work easier and saved a lot of time.



By: Laurent

It does the job, but even if it might sound like I'm preaching for my church, please don't forget to store your backups on remote storage! People often forget this step. I've written an article on how to do this; you can do it on your own, no service needed, super easy.