Comments on MySQL Backup: Table By Table Backup With Auto Rotation, For Easy Restoration Of Partial/Full Database
Here is a MySQL backup script which can take table-by-table backups (individual backup files for each table of each database) in a compressed format. It also provides automatic rotation of old backup files. The backup script handles InnoDB and MyISAM tables separately.
Comments
If you want the data in a file for each table, consider using the --tab option instead. Then you can use FLUSH TABLES WITH READ LOCK to get consistent MyISAM and InnoDB dumps, or --single-transaction if only InnoDB tables are used. This is described in the manual at http://dev.mysql.com/doc/refman/5.5/en/mysqldump-delimited-text.html .
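A minimal sketch of that approach (assuming a hypothetical database named mydb and a directory the mysqld server can write to; --tab needs the FILE privilege, and secure_file_priv may restrict where the server may write):

mysqldump --single-transaction --tab=/var/tmp/dump mydb

This writes one tablename.sql schema file and one tablename.txt data file per table into /var/tmp/dump. --single-transaction is only safe if all tables are InnoDB; for MyISAM, hold FLUSH TABLES WITH READ LOCK in a separate session for the duration of the dump instead.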
For faster binary backups you could also take a look at MySQL Enterprise Backup, described at http://mysql.com/products/enterprise/backup.html and available for evaluation via E-Delivery at https://edelivery.oracle.com/ .
The script is insecure, placing the password on the command line where it will be visible with top and other process tools. One of the more secure ways to do this is to use expect from bash, as illustrated by durden tyler at http://www.unix.com/shell-programming-scripting/124548-passing-mysql-password-bash-script.html with this code:
#!/usr/bin/bash
PASS=blah
echo "
spawn /bin/bash
send \"mysqldump -u root -p database_1 my_table\r\"
expect \"password:\"
send \"$PASS\r\"
expect \"$ \"
" | expect >backup.sql
James Day, MySQL Principal Support Engineer, Oracle UK
This script takes backups one table at a time, which carries a significant risk that you will get an inconsistent backup.
For example, if table LineItems references table Orders, and your backup script is running concurrently with a user who is canceling an order, your backup will include the LineItems rows for the canceled order, but some seconds later when the script reaches the parent table Orders, the parent row for the deleted order is gone. In fact, the dependent LineItems rows are gone by that time too, but your script backed up that table before the rows were deleted. Thus your backup contains orphan rows that were never orphaned in the live database.
Also note that the --single-transaction option is important to preserve consistency when mysqldump outputs multiple tables. If you back up only one table per invocation of mysqldump, using --single-transaction is superfluous.
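For illustration, a consistent whole-database dump in one invocation looks like this (hypothetical database name mydb, all tables assumed to be InnoDB):

mysqldump --single-transaction mydb > mydb.sql

Because every table is read within one snapshot transaction, the Orders/LineItems race described above cannot occur.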
I assume you wrote this script to run table-by-table backups because backing up in one command locks the databases for too long, and you need to allow concurrent users of the database. But by backing up table-by-table, you must lock out concurrent changes to the database while the script runs, or else get inconsistent backups. In that case, there's no difference between locking the database for one long-running mysqldump command and locking it for the duration of a series of shorter-running mysqldump commands.
Your script does create one output file per table, which mysqldump does not do by default. If you need the ability to restore one table at a time, I suggest you could run one mysqldump command, and pipe its output to a Perl script that matches CREATE TABLE patterns, closes the current output file, and opens a new output file.
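A minimal sketch of that split idea, using awk instead of Perl and keying on mysqldump's default "-- Table structure for table" section headers rather than the CREATE TABLE lines themselves (hypothetical database name mydb):

mysqldump --single-transaction mydb | awk '
    /^-- Table structure for table/ {
        name = $NF              # last field is the backquoted table name
        gsub(/`/, "", name)     # strip the backquotes
        file = name ".sql"      # switch to a new per-table output file
    }
    file { print > file }       # awk keeps the file open and appends
'

Note that the SET statements mysqldump emits before the first table header end up in no file with this sketch; a real script would prepend them to every per-table file so each one restores cleanly.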
I also suggest you take a look at Percona XtraBackup, which performs a consistent backup concurrently with active use of the database, runs much faster than mysqldump, and supports incremental backups and streaming. Percona XtraBackup is a free tool from my employer, Percona Inc. See http://www.percona.com/software/percona-xtrabackup/
This does not provide consistent backups and is pretty useless for any backup that has even remote data integrity requirements. Only use this if you don't care about your data.
What is the purpose for flushing the logs?
The backup script has no notion of atomicity; tables are exported one after the other, which is only good for those who do not care about the consistency and integrity of the data. Unless, of course, no one writes to the database.
What is the purpose of the script in the first place? What is the reason I would want to use it?
Forgive me if I'm being too critical.
There is another script which I often use as well:
http://sourceforge.net/projects/automysqlbackup/
It has some more features but cannot back up per table. So all in all it depends on what you need.
Great! This is a fully informative tutorial on how to create a database backup. It just made my work easier and saved a lot of time.
Thanks.
It does the job, but even if it might sound like I'm preaching for my church: please don't forget to store your backups on remote storage! People often forget this step. I've written an article on how to do this (https://simplebackups.io/blog/the-ultimate-mysql-database-backup-script/); you can do it on your own, no service needed, super easy.
Laurent - simplebackups.io
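For instance, a minimal sketch of shipping the finished dumps offsite (hypothetical paths and host, assuming key-based SSH access is already set up):

rsync -az /var/backups/mysql/ backupuser@remote.example.com:/srv/mysql-backups/

Run it from cron right after the backup job, so a local disk failure never takes the only copy with it.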