  #12  
Old 11th February 2013, 04:44
Chris Graham is offline
Junior Member
 
Join Date: Feb 2013
Posts: 7
Thanks: 0
Thanked 4 Times in 2 Posts
Default

Quote:
Originally Posted by synapse123 View Post
If you have a real working server, my general advice is to install BackupPC on another (backup) server and back up the /var/clients, /var/www, /var/vmail, /home, /etc and /usr/local/ispconfig folders (did I forget anything?). Or, easier, just back up everything from the root, excluding useless stuff like /var/lib/mysql, /var/log and /var/cache.

To store the databases, just create an SQL export in the /var/clients/sql folder before every backup with mysqldump (BackupPC has an option to run any script before a backup). BackupPC will then pick up the fresh files created by mysqldump.
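For reference, the "run a script before backup" hook mentioned above is BackupPC's $Conf{DumpPreUserCmd} setting. A sketch of what that might look like in the per-host config (the hostname and script path are examples, adjust to your setup):

```perl
# In /etc/backuppc/<host>.pl (path varies by distro).
# Run the export script on the client over ssh before each dump:
$Conf{DumpPreUserCmd} = '$sshPath -q -x -l root $host /var/clients/sql/export_sql.sh';
```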

For big databases a small optimization is desirable. Here is my script to optimize disk space usage in backups: it exports every table of every database into a separate file. BackupPC should run it before each backup.

$ cat /var/clients/sql/export_sql.sh

Code:
#!/bin/sh

# Create this directory first!
SQL_DIR=/var/clients/sql/mysql
# Create a MySQL user with global read (SELECT) permissions first.
DB_USER="backup_reader"
DB_PASS="put its password here"
umask 0077

# Build the list of databases, excluding the ones we don't need to back up.
DATABASES=$(mysql -u"$DB_USER" -p"$DB_PASS" --default-character-set=utf8 --batch --skip-column-names --execute="SHOW DATABASES" | grep -v "test" | grep -v "prosearch" | sort)

# Walk through each database and collect its table names.
for DBNAME in $DATABASES
do
  DB_DIR="$SQL_DIR/$DBNAME"
  mkdir -p "$DB_DIR"
  # First delete the old dumps exported during the previous backup.
  rm -f "$DB_DIR"/*.sql.bz2
  TABLES=$(mysql -u"$DB_USER" -p"$DB_PASS" --default-character-set=utf8 --batch --skip-column-names --execute="SHOW TABLES" "$DBNAME" | sort)
  for TableName in $TABLES
  do
    # Then dump each table into its own file...
    /usr/bin/mysqldump -u"$DB_USER" -p"$DB_PASS" --default-character-set=utf8 --result-file="$DB_DIR/$TableName.sql" "$DBNAME" "$TableName"
    # ...and bzip2 each SQL file.
    /usr/bin/bzip2 "$DB_DIR/$TableName.sql"
  done
done
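One caveat with the exclusion lines: grep -v "test" drops every database whose name merely contains "test", not just the test database itself. A small sketch (database names are made up) showing the difference against grep -vx, which matches whole lines only:

```shell
# Hypothetical database list, one name per line.
DBLIST='information_schema
mytest_db
shop
test'

# grep -v drops every name CONTAINING "test", including mytest_db:
echo "$DBLIST" | grep -v "test"
# prints: information_schema
#         shop

# grep -vx drops only the exact name "test":
echo "$DBLIST" | grep -vx "test"
# prints: information_schema
#         mytest_db
#         shop
```

If you have production databases with "test" in their names, -vx (or -vw) is the safer choice.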
This is extremely useful when you need to restore a single table, or part of one. On huge databases it also saves the time you would otherwise spend cutting apart one long SQL file.
You need to be careful with mysqldump: it can take a busy server down while the dump is running. Experimenting showed the following parameters are needed...
--skip-lock-tables --quick --lock-tables=false
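To illustrate the single-table restore this layout enables, here is a sketch using a throwaway directory; the real dumps would live under /var/clients/sql/mysql/<db>/, and the database and table names below are hypothetical:

```shell
# Stand-in for one per-table dump produced by the export script.
SQL_DIR=$(mktemp -d)
printf 'CREATE TABLE users (id INT);\n' > "$SQL_DIR/users.sql"
bzip2 "$SQL_DIR/users.sql"          # this is what the script stores

# Restoring a single table: decompress just the one file you need...
bunzip2 -c "$SQL_DIR/users.sql.bz2" > "$SQL_DIR/restored.sql"
cat "$SQL_DIR/restored.sql"

# ...and on a real server you would pipe it straight into mysql:
#   bunzip2 -c /var/clients/sql/mysql/mydb/users.sql.bz2 | mysql -u root -p mydb
```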