If you have a real working server, my general advice is to install BackupPC on another (backup) server and back up the /var/clients, /var/www, /var/vmail, /home, /etc, /usr/local/ispconfig folders (did I forget anything?). Or, easier, just back up everything from the root, excluding some useless stuff like /var/lib/mysql, /var/log and /var/cache.
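In BackupPC those exclusions can go straight into the per-host config file. A sketch of what that might look like, assuming an rsync transfer and a single "/" share (the extra /proc, /sys and /tmp entries are my own additions, adjust to taste):

```
# per-host config.pl sketch (assumed rsync transfer, single "/" share)
$Conf{RsyncShareName} = ['/'];
$Conf{BackupFilesExclude} = {
    '/' => ['/var/lib/mysql', '/var/log', '/var/cache',
            '/proc', '/sys', '/tmp'],
};
```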
To store the database, just create its sql export into the /var/clients/sql folder before every backup with mysqldump (there is an option in BackupPC to run any script before a backup). Then BackupPC will pick up those new files created by mysqldump.
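The BackupPC option in question is $Conf{DumpPreUserCmd}. A sketch of the per-host setting, assuming the export script lives at the path used below and the client is reachable over ssh as root:

```
# per-host config.pl sketch: run the export on the client before each dump
# (ssh as root and the script path are assumptions)
$Conf{DumpPreUserCmd} = '$sshPath -q -x -l root $host /var/clients/sql/export_sql.sh';
# abort the backup if the export script exits non-zero
$Conf{UserCmdCheckStatus} = 1;
```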
In case of big databases a small optimization is desirable. Here is my script to optimize the disk space usage of the backups. It exports every table of every database into a separate file. BackupPC runs it before every backup.
$ cat /var/clients/sql/export_sql.sh
#!/bin/sh
# create the /var/clients/sql dir first!
# create a user with global read permissions in your mysql
DB_USER="backup"
DB_PASS="put its password there"
# here we create the list of databases, and exclude some of them which we don't need to backup
DATABASES=`mysql -u$DB_USER -p$DB_PASS --default-character-set=utf8 --batch --skip-column-names --execute="SHOW DATABASES" | grep -v "test" | grep -v "prosearch" | sort`
# we walk through each database and take the names of its tables
for DBNAME in $DATABASES
do
    DB_DIR="/var/clients/sql/$DBNAME/"
    mkdir $DB_DIR > /dev/null 2>&1
    # first we delete all the old sql exported in the previous backup
    rm -f $DB_DIR*.sql.bz2
    TABLES=`mysql -u$DB_USER -p$DB_PASS --default-character-set=utf8 --batch --skip-column-names --execute="SHOW TABLES" $DBNAME | sort`
    for TableName in $TABLES
    do
        # then we dump each table separately...
        /usr/bin/mysqldump -u$DB_USER -p$DB_PASS --default-character-set=utf8 --result-file=$DB_DIR/$TableName.sql $DBNAME $TableName
        # ...and bzip each sql file
        bzip2 -f $DB_DIR/$TableName.sql
    done
done
It's extremely useful when you need to restore a single table or just part of one. On huge databases it simply saves the time of cutting the table you want out of one long sql file.
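A restore of one table then becomes a one-liner. A small sketch (the database and table names, and the /tmp paths in the roundtrip demo, are made-up examples):

```shell
#!/bin/sh
# With per-table dumps, restoring a single table is just:
#   bzcat /var/clients/sql/$DBNAME/users.sql.bz2 | mysql -u$DB_USER -p$DB_PASS $DBNAME
# The roundtrip below shows that bzip2/bzcat preserve the dump unchanged:
printf 'DROP TABLE IF EXISTS users;\nCREATE TABLE users (id INT);\n' > /tmp/users.sql
bzip2 -f /tmp/users.sql      # replaces the file with /tmp/users.sql.bz2
bzcat /tmp/users.sql.bz2     # writes the original SQL to stdout
```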