Shell Script To Back Up All MySQL Databases, Each Table In An Individual File And Upload To Remote FTP

Submitted by marchost (Contact Author) (Forums) on Tue, 2008-10-07 13:15. :: Backup | MySQL

This script creates a backup of each table in every database (one file per table), compresses each dump, and uploads the result to a remote FTP server.

First, create a MySQL user with the SELECT and LOCK TABLES privileges (or use root).
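For example, a dedicated backup user could be created like this (the user name and password here are placeholders, not part of the script):

```
mysql -u root -p -e "GRANT SELECT, LOCK TABLES ON *.* TO 'backup'@'localhost' IDENTIFIED BY 'YOUR_PASSWORD';"
```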

Then run this script from your crontab every hour:
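A matching crontab entry could look like this (the script path is a placeholder; adjust it to wherever you save the script):

```
# min  hour  dom  mon  dow  command
0 * * * * /root/scripts/mysql-backup.sh >/dev/null 2>&1
```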

#!/bin/sh
# System + MySQL backup script
# Copyright (c) 2008 Marchost
# This script is licensed under GNU GPL version 2.0 or above
# ---------------------------------------------------------------------

#########################
######TO BE MODIFIED#####

### System Setup ###
BACKUP=YOUR_LOCAL_BACKUP_DIR

### MySQL Setup ###
MUSER="MYSQL_USER"
MPASS="MYSQL_USER_PASSWORD"
MHOST="localhost"

### FTP server Setup ###
FTPD="YOUR_FTP_BACKUP_DIR"
FTPU="YOUR_FTP_USER"
FTPP="YOUR_FTP_USER_PASSWORD"
FTPS="YOUR_FTP_SERVER_ADDRESS"

###### DO NOT MODIFY BELOW THIS LINE ######
#########################################

### Binaries ###
TAR="$(which tar)"
GZIP="$(which gzip)"
FTP="$(which ftp)"
MYSQL="$(which mysql)"
MYSQLDUMP="$(which mysqldump)"

### Today + hour in 24h format ###
NOW=$(date +"%d%H")

### Create hourly dir ###

mkdir -p "$BACKUP/$NOW"

### Get all database names ###
DBS="$($MYSQL -u $MUSER -h $MHOST -p$MPASS -Bse 'show databases')"
for db in $DBS
do

### Create a dir for each database, back up its tables in individual files ###
  mkdir -p "$BACKUP/$NOW/$db"

  for i in $(echo "show tables" | $MYSQL -u $MUSER -h $MHOST -p$MPASS $db | grep -v Tables_in_);
  do
    FILE=$BACKUP/$NOW/$db/$i.sql.gz
    echo $i; $MYSQLDUMP --add-drop-table --allow-keywords -q -c -u $MUSER -h $MHOST -p$MPASS $db $i | $GZIP -9 > $FILE
  done
done

### Compress all tables in one nice file to upload ###

ARCHIVE=$BACKUP/$NOW.tar.gz
ARCHIVED=$BACKUP/$NOW

$TAR -czvf $ARCHIVE $ARCHIVED

### Dump backup using FTP ###
cd $BACKUP
DUMPFILE=$NOW.tar.gz
$FTP -n $FTPS <<END_SCRIPT
quote USER $FTPU
quote PASS $FTPP
cd $FTPD
put $DUMPFILE
quit
END_SCRIPT

### Delete the backup dir and keep archive ###

rm -rf $ARCHIVED
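To restore one of the per-table dumps, decompress it and pipe it back into mysql. The real restore command is shown in the comment below (database, table, and path names are placeholders); the runnable part of the sketch only demonstrates that a `gzip -9` dump restores byte-identical, using a dummy dump instead of a live database:

```shell
# Real-world restore (placeholder names, adjust to your setup):
#   gunzip < /backup/0713/mydb/mytable.sql.gz | mysql -u MYSQL_USER -p mydb
# The round trip below verifies that gzip -9 output decompresses to
# exactly what was written, using a dummy one-line dump.
DUMP='CREATE TABLE demo (id INT);'
printf '%s\n' "$DUMP" | gzip -9 > /tmp/demo.sql.gz
RESTORED="$(gunzip < /tmp/demo.sql.gz)"
printf '%s\n' "$RESTORED"
```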

Submitted by Anonymous (not registered) on Sat, 2008-10-11 09:27.
# USERNAME
MyUSER="`cat /etc/mysql/debian.cnf | grep user | uniq | awk -F'=' '{print $2}'`"
# PASSWORD
MyPASS="`cat /etc/mysql/debian.cnf | grep password | uniq | awk -F'=' '{print $2}' | \
sed -e 's/^ //'`"
# Hostname
MyHOST="localhost"
# Servername
SERVER="$HOSTNAME"
# mysqldump options
MYSQLDUMPOPTS="--opt"
# Linux bin paths, change this if it can't be autodetected via which command
MYSQL="$(which mysql)"
MYSQLDUMP="$(which mysqldump)"
CHOWN="$(which chown)"
CHMOD="$(which chmod)"
GZIP="$(which gzip)"

# Backup destination directory; change this if you want some other location
DEST="/root/backups"
# Main directory where backup will be stored
MBD="$DEST/mysql"
#create if not existing
if [ ! -e "$MBD" ]; then
        mkdir -p $MBD
fi
# Get date in mm-dd-yyyy format
NOW="$(date +"%m-%d-%Y")"
FNAME="mysql-backup"
# File to store current backup file
FILE=""
# Store list of databases
DBS=""
TMP="/tmp/$NOW"
mkdir -p $TMP
# keep 5 rotated archives; the newest archive is always number zero
n=5

while [ ! $n -le 0 ]; do
        let "m = $n - 1"
        if [ -e $MBD/$FNAME.$m.tar.gz ]; then
                mv $MBD/$FNAME.$m.tar.gz $MBD/$FNAME.$n.tar.gz
        fi
        let "n = $n - 1"
done



# DO NOT BACKUP these databases
# IGGY="test"

[ ! -d $MBD ] && mkdir -p $MBD || :
[ ! -d $TMP ] && mkdir -p $TMP || :
# Only root can access it!
$CHOWN -R root:root $DEST
$CHMOD 700 $DEST

# Get all database list first
DBS="$($MYSQL -u $MyUSER -h $MyHOST -p$MyPASS -Bse 'show databases')"

for db in $DBS
do
   skipdb=-1
   if [ "$IGGY" != "" ];
   then
       for i in $IGGY
       do
           [ "$db" == "$i" ] && skipdb=1 || :
       done
   fi

   if [ "$skipdb" == "-1" ] ; then
       FILE="$db.$SERVER.$NOW.sql"
       # dump the selected database with mysqldump
       # into a .sql file in the temp backup dir
       $MYSQLDUMP -u $MyUSER -h $MyHOST -p$MyPASS $MYSQLDUMPOPTS $db > $TMP/$FILE
   fi
done
tar czvf $MBD/$FNAME.0.tar.gz $TMP &> /dev/null
rm -r $TMP
exit 0
Submitted by Dave (not registered) on Sat, 2008-10-11 04:10.

I would recommend AutoMySQLBackup as a better alternative.  You can find more about it on SourceForge at:

http://sourceforge.net/projects/automysqlbackup/

Submitted by marchost (registered user) on Fri, 2008-10-10 13:30.

To fix the problem related to consistency between the tables (see danielj's comment below), I recommend replacing these lines:

### Create a dir for each database, back up the whole database in one file ###
  mkdir -p $BACKUP/$NOW/$db
  FILE=$BACKUP/$NOW/$db/$db.sql.gz
  echo $db; $MYSQLDUMP --add-drop-table --allow-keywords -q -c -u $MUSER -h $MHOST -p$MPASS $db | $GZIP -9 > $FILE
done

### Compress all tables in one nice file to upload ###

Instead of creating a backup of individual tables, this will create individual database backups. You can open the .sql file afterwards in your favorite text editor and retrieve table data if needed.
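If the tables use the InnoDB engine, a per-database dump can also be made consistent without locking by adding mysqldump's --single-transaction option, which takes the whole dump inside one transaction. For example (same placeholder variables as in the script):

```
$MYSQLDUMP --single-transaction --add-drop-table --allow-keywords -q -c -u $MUSER -h $MHOST -p$MPASS $db | $GZIP -9 > $FILE
```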

Submitted by Webmaster (not registered) on Fri, 2008-10-10 03:09.

    Hi Marc,

Many thanks for sharing your shell scripting knowledge; this works perfectly.

On my first try, I got two errors stating that "which mysql" and "which mysqldump" were not found, so I manually edited those two lines and put in the real paths of the two commands. I am using LAMPP, by the way.

I also put it in my cron.hourly and tested it; it works fine. I also checked my FTP directory, and everything is there.

I am also running it with root privileges.

I am using Fedora 6 and I am behind a firewall/proxy but no problem was found.

Once again, thanks for this great script. Hope more people will enjoy using this script.


Submitted by Anonymous (not registered) on Tue, 2008-10-07 14:04.

I think it's a very poor script.

Where is the retention?

What about the disk writes? You have to be careful with your disk and big databases.

And the bandwidth? Why not use a temporary directory and synchronize it to a final directory to save some bandwidth on the FTP connection? Or, better yet, why FTP at all? You could save more time and bandwidth with rsync.

And what about big databases? You spend more time, CPU, and bandwidth making the dumps every hour and moving them to the FTP server than letting the applications run against the database.

The script does the job, but... it is very poor.

Submitted by marchost (registered user) on Tue, 2008-10-07 23:10.

Quote from the famous movie Snatch :

"...if I throw a dog a bone, I don't want to know if it tastes good or not..."

This works perfectly for me. I just wanted to keep a backup of my important databases; they are quite small (under 50 MB) but contain very important data.

If you think it's poor, why don't you open a text editor and write a perfect script you can share with people?

Marc

Submitted by danielj (not registered) on Tue, 2008-10-07 12:07.
This will create an inconsistent dump, since table A may be changed while table B is being dumped. This might for example break foreign key constraints. This is not necessarily a problem but should be kept in mind.
Submitted by marchost (registered user) on Tue, 2008-10-07 23:17.

Hi, indeed you are right. The tables will not be 100% consistent with each other, but for my usage this doesn't cause problems.

I use this with replication. When I do a checksum of my databases to check consistency with Maatkit (http://blogama.org/node/39), I get results table by table.

Most of the time, only one table at a time is inconsistent (if it happens at all!), so I didn't want a complete dump of the database.