
#1 | Ovidiu (Senior Member) | 21st October 2005, 12:40
serverbackup with backup2l

hi guys,

I am using this howto as the basis of my backup: http://wiki.hetzner.de/index.php/Backup2l

Anyone else doing something similar?
I have some questions concerning backup2l; maybe someone can answer them.
First of all, here is how I understood backup2l: it backs up the locations I specify (e.g. /usr, /var, /etc) in a format I specify (e.g. tar.gz), stores the archives locally, encrypts them for the transfer, puts them on the remote server via ncftp, and then deletes the encrypted copies.
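
For reference, backup2l is configured through /etc/backup2l.conf, and the retention behaviour discussed below is controlled by a handful of variables there. A minimal sketch (the variable names are backup2l's as far as I recall them; the paths and values are purely illustrative):

Code:
# /etc/backup2l.conf (sketch; adjust paths and values to your system)
VOLNAME="all"                     # archive names start with "all." (hence the all.* pattern later in this thread)
SRCLIST=(/etc /usr /var /home)    # directories to back up
SKIPCOND=(-path "/var/tmp/*")     # find-style conditions for files to skip
BACKUP_DIR="/var/backup"          # where archives and index files are kept locally

# Retention settings (the ones discussed in this thread)
MAX_LEVEL=1         # 0 = full backups only, 1 = fulls plus one level of differentials
MAX_PER_LEVEL=6     # differential backups per full backup
MAX_FULL=4          # number of full backups to keep
GENERATIONS=1       # complete generations to keep before old ones are purged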

Can someone who has read that howto and maybe the manpage (which I did, by the way ;-)) explain how exactly the next differential backup is done? Does it do a complete backup, compare it to the one still stored locally and transfer only the diff, or does it somehow do only a diff backup, perhaps storing information about the full backup somewhere and using that data for the comparison?
Do I have to keep one backup on the hard disk all the time? It looks to me like one backup is always kept on disk. Also, several months ago when I did my first tests, it behaved so that an outdated backup from, say, two weeks ago was only deleted once a new one had been completed; that way I always had at least the minimum number of backups I specified in the config. At the moment, as soon as backups reach the age I specified, all the old ones are deleted and I am left with only one full backup (the latest one)..

Maybe someone can share their configuration with me or give me some hints.
#2 | Ovidiu (Senior Member) | 22nd October 2005, 12:50

I solved the problem of backups being deleted when a new cycle was started. I changed the following settings:
max level to 1
max per level to 6
max full to 4
generations to 4

This means I will get a full backup, then 6 diff backups, then another full one, then 6 diffs, and so on. The last 4 full backups and 6*4 diff backups will always be available, which roughly equals one month of backups.
Still, I am wondering whether the backups really have to stay on my server once I have already transferred them to my backup space.

@falko: how should I change this so my SQL dump will also be selected?
FILES=`find . -name 'all.*' -newer timestamp ! -type d`
At the moment this selects only the backups, without the SQL dump.
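
For context, that FILES= line comes from the post-backup hook in the Hetzner howto: after each run it collects everything in the backup directory that is newer than a timestamp file and pushes it to the FTP backup space. A rough sketch of such a hook (user, password, host and paths are placeholders, and the howto's encryption step is left out here):

Code:
# Sketch of a POST_BACKUP hook in /etc/backup2l.conf (placeholders throughout)
POST_BACKUP ()
{
    cd /var/backup || return 1
    # everything created since the last upload (archives, index files, SQL dump, ...)
    FILES=`find . -name 'all.*' -newer timestamp ! -type d`
    if [ -n "$FILES" ]; then
        ncftpput -u FTPUSER -p FTPPASSWORD backup.example.com /backup $FILES
    fi
    touch timestamp
}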
#3 | falko (Super Moderator) | 22nd October 2005, 16:11

Quote:
Originally Posted by Tenaka
how should I change this so my SQL dump will also be selected:
FILES=`find . -name 'all.*' -newer timestamp ! -type d` at the moment this selects only the backups, without the SQL dump
You could simply rename your sql dump to something like all.sqldump.sql.
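
One way to apply that, assuming MySQL and the backup directory used earlier in the thread, is to write the dump from backup2l's pre-backup hook so it always matches the all.* pattern. A sketch (credentials and paths are placeholders):

Code:
# Sketch of a PRE_BACKUP hook in /etc/backup2l.conf (placeholders throughout)
PRE_BACKUP ()
{
    # dump all databases into a file that the all.* find pattern will pick up
    mysqldump -u root -pSECRET --all-databases > /var/backup/all.sqldump.sql
}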
#4 | Ovidiu (Senior Member) | 22nd October 2005, 18:07

Wow, what a simple solution. I was already intimidated by the line above: I thought about concatenating the search string 'all.*' with another one and already had nightmares about having to read through a lot of man pages...

*just kidding* But thanks for the easy solution, I'll implement it right now.

Still, I can't find an answer to this:
Quote:
Still, I am wondering whether the backups really have to stay on my server once I have already transferred them to my backup space.

#5 | Ovidiu (Senior Member) | 23rd October 2005, 19:42

And by the way, all these settings:
max level to 1
max per level to 6
max full to 4
generations to 4
only affect the local backups; once I put them on the remote storage, they accumulate until it is full :-((

It looks like Reoback is much smarter in this respect, although it does not do real incremental backups, and it seems the project is no longer active...

Any other solutions?
#6 | falko (Super Moderator) | 23rd October 2005, 21:01

Quote:
Originally Posted by Tenaka
any other solutions?
http://www.howtoforge.com/linux_rdiff_backup
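
For the record, basic rdiff-backup usage looks roughly like this; it assumes SSH access and rdiff-backup installed on both machines, which, as it turns out below, plain FTP backup space does not offer:

Code:
# Mirror /home to the backup host, keeping reverse increments for older versions
rdiff-backup /home user@backuphost::/backups/home

# Purge increments older than four weeks
rdiff-backup --remove-older-than 4W user@backuphost::/backups/home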
#7 | killfrog (Junior Member) | 23rd October 2005, 21:54
NFS

I was about to suggest that you set up an NFS share on the backup server, but I think the tutorial falko gave you is quite good.
Anyway, you could export a directory on your backup server via NFS and mount it, for example, on /backup on the server you want to back up. To that server, an NFS-mounted folder looks like a local folder, so the backup program can do the diff comparison for incremental backups exactly as you configured it.
Ziv
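
A sketch of what killfrog describes, with placeholder addresses (backup server 192.168.0.10, client 192.168.0.20):

Code:
# On the backup server, export a directory in /etc/exports:
/backup  192.168.0.20(rw,sync,no_subtree_check)
# then reload the export table:
exportfs -ra

# On the server being backed up, mount it so it looks like a local directory:
mkdir -p /backup
mount -t nfs 192.168.0.10:/backup /backup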
#8 | Ovidiu (Senior Member) | 23rd October 2005, 23:53

Thanks for the input, BUT I only have backup space on an FTP server provided by my hosting company. I have seen a lot of rdiff-backup howtos on the net, but all of them required a setup on the backup server, which I guess is not possible in my case (I am using the free backup space from Strato). AND, as far as I have understood, being able to mount NFS requires a setup on the backup server as well. Unfortunately, as far as I know (though I might be wrong), Strato only gives out FTP storage space...

Talking about Reoback, I guess I'll have to look up that howto again.
#9 | siggma (Junior Member) | 18th July 2009, 19:04

Old thread, new issue:

Greetings.

I recently upgraded to a multi-core processor and have been looking for a way to leverage multiple CPUs when writing a backup. I found pigz, a multi-threaded compressor that seems to work fine: http://www.zlib.net/pigz/

However, it does not play nicely with the tar cfz or xzf command options: it leaves the leading "/" on filenames in the archive, causing restores to fail. To remedy this I have created a driver that pipes tar's output through pigz to create the archive. Below is the driver. It works OK for me.

Code:
USER_DRIVER_LIST="DRIVER_TAR_GZ_PIGZ"

DRIVER_TAR_GZ_PIGZ () {
# NOTES: Use only with a multi-core CPU.
# Requires you to download and compile pigz (http://www.zlib.net/pigz/)
# and copy it somewhere into your system path, e.g. /usr/bin/pigz.
# -create uses pigz for threaded compression;
# -extract uses plain tar xz so you can restore without pigz installed.
    case $1 in
        -test)
            require_tools tar gzip pigz cat
            echo "ok"
            ;;
        -suffix)
            echo "tgz_pigz"
            ;;
        -create) # Arguments: $2 = BID, $3 = archive file name, $4 = file list file
            # stderr is deliberately not redirected into "$3" here; tar/pigz
            # warnings written into the archive would corrupt the gzip stream
            tar cf - --files-from="$4" | pigz --best > "$3"
            ;;
        -toc) # Arguments: $2 = BID, $3 = archive file name
            # Using plain tar tz for the TOC verifies that the pigz archive is
            # readable even if pigz is not installed when you restore
            cat "$3" | tar tz | sed 's#^#/#'
            ;;
        -extract) # Arguments: $2 = BID, $3 = archive file name, $4 = file list file
            # The next line should also work, decompressing with pigz, but it is not well tested:
            # cat "$3" | pigz -d | tar x --same-permission --same-owner --files-from="$4" 2>&1
            cat "$3" | tar xz --same-permission --same-owner -T "$4" 2>&1
            ;;
    esac
}
Updated driver script available here: http://www.trbailey.net/tech/backup2l.html
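
If I remember backup2l's configuration correctly, the custom driver is then enabled by putting the function (or sourcing the script) in /etc/backup2l.conf and pointing the create driver at it, roughly:

Code:
# in /etc/backup2l.conf, after the DRIVER_TAR_GZ_PIGZ definition
CREATE_DRIVER="DRIVER_TAR_GZ_PIGZ"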

#10 | killfrog (Junior Member) | 18th July 2009, 22:48

You should try "tar cfzP" to create and "tar xzfB" to extract; that has always solved the leading "/" problems for me.
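
Spelled out in GNU tar terms (as far as I can tell, -P is --absolute-names, i.e. the leading "/" is kept instead of stripped, and -B is --read-full-records; whether you really want absolute paths on restore is worth considering):

Code:
# create, keeping absolute path names instead of stripping the leading "/"
tar czPf /var/backup/all.1.tar.gz /etc /var/www

# extract; -B mainly matters when reading the archive from a pipe
tar xzfB /var/backup/all.1.tar.gz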