Disaster Recovery Mechanism

Discussion in 'Server Operation' started by barney.parker, Mar 30, 2009.

  1. barney.parker

    barney.parker New Member

    Hi all,

    I am a relative Linux newbie, and have been surprised at the lack of documentation regarding live system backups.

    I come from a Windows background (I know, if you've got a complaint, see the management, not me!) and am trying to sell the idea of switching to Linux. Currently the big issue is disaster recovery. If that's not a simple proposition, the whole deal is off!

    What I need is a simple system to back up my demo server (Ubuntu 8.04 running a simple LAMP stack).

    The disk currently uses around 4GB of space, so a raw image should fit on an 8GB flash drive; ideally, with compression, I could keep 3-4 images on it!

    I have tried a few dd commands, but I seem to be missing something, as my 4GB of data fills the drive in no time!

    The command I am using is:

    dd if=/dev/sda conv=sync,noerror bs=64k | bzip2 -9 > /mnt/sdba/sda.img.bz2

    I am using bzip2 for its higher compression ratios (it's a quiet test server, so no worries about excessive load).

    It seems to me that it is actually trying to image the ENTIRE disk (~70GB). My initial thought was that the empty areas would be zeros and therefore compress to a tiny size; however, it seems this is not the case.

    I have been told that the command

    dd if=/dev/zero of=/tmp/delete.me; rm /tmp/delete.me;

    would set the free space to zero. However, to my mind that's asking dd to create an image file from a device meant to provide zeros. Surely this won't zero anything, unless I am missing something!
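
    For reference, the full sequence I was given would presumably be something like this (untested on my side; the path and block size are just examples):

```shell
# The zero-fill sequence as it was suggested to me (untested; the path
# and block size are examples). dd copies zeros from /dev/zero into an
# ordinary file until the filesystem runs out of space; the file is
# then deleted, freeing the blocks again.
dd if=/dev/zero of=/tmp/delete.me bs=1M
sync
rm /tmp/delete.me
```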

    Essentially, I would like to end up with a cron job that backs up my disk nightly to a flash drive and can be restored (possibly using a LiveCD) in the event of a disaster. I want to be able to test this on a VM just to be sure.

    I think my problem is using the wrong terminology when Googling, but I just can't seem to get anywhere with this!

    Any help would certainly be appreciated!
     
  2. falko

    falko Super Moderator ISPConfig Developer

    If you want image-based backups, take a look at SystemImager, CloneZilla, or Ghost4Linux. You can find tutorials about these solutions in the Backup category. :)
     
  3. barney.parker

    barney.parker New Member

    Thanks for the reply, but I have taken a look at those, and they all appear to be essentially off-line backup tools.

    In our normal operations we have users working 24/7, so the backup system needs to run on a live system. I can't seem to find anything (or I am missing something!) that will run this way.

    I would be happy with a paid solution too, if that's the best way forward?

    Thanks
     
  4. deconectat

    deconectat New Member

    What about rsync? Take a look here and here.
     
  5. barney.parker

    barney.parker New Member

    Now rsync seems to be heading in the right direction!

    My only problem is that rsync seems to rely on an extra server. I guess as long as it's storing to an external device of some kind, that would be OK...

    An issue I have is insurance. To keep our insurance valid, backups must be stored off-site in a fire-proof safe, and must be retained (monthly and yearly tapes) for a period of 5 years.

    Eventually I will move over to using a tape drive. The disks are never going to hold more than 100GiB, so an HP Ultrium drive will work fine.

    I would prefer not to have the overhead of an extra server for backups if possible, which is why I was hoping to back up to a flash drive or other removable storage media.

    As mentioned above, this is an attempt to sell Linux to the management, but before I can do so I need to ensure the entire setup works pretty much as our WinTel setup does from a disaster recovery and fault tolerance point of view. The fault tolerance isn't an issue (clustered MySQL, load-balanced Apache et al.), but the disaster recovery seems to be!

    Out of interest, how are other users doing this in production environments? Is there an alternative to my way of thinking that I've never come across in the MS world?

    Thanks
     
  6. falko

    falko Super Moderator ISPConfig Developer

  7. deconectat

    deconectat New Member

    You could write a small bash script to copy just the files you need to your external drive and run it via cron, if you don't need a full disk image.
    In that case, if your hard drive fails, you'll have to install Linux by hand and then copy the files back. If you lose files, you can just copy them back to your server.

    You can use rsync to copy the files to your workstation and then write them to a DVD, for example. You don't need a dedicated server for this.
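
    A rough sketch of such a cron script (all names and paths are examples; the source and destination default to throwaway temp locations here so the sketch can be dry-run safely):

```shell
#!/bin/sh
# Rough nightly-backup sketch (names and paths are examples). SRC would
# be the directories you care about (/var/www, /etc/apache2, ...); DEST
# would be the mounted flash drive. Both default to temp locations so
# the script can be dry-run without touching anything real.
set -e
SRC="${SRC:-$(mktemp)}"
DEST="${DEST:-$(mktemp -d)}"
stamp=$(date +%Y-%m-%d)
tar -czPf "$DEST/files-$stamp.tar.gz" "$SRC"
# rotate: keep only the three most recent archives
ls -1t "$DEST"/files-*.tar.gz | tail -n +4 | xargs -r rm --
```

    Run it nightly from cron with an entry along the lines of 0 2 * * * /usr/local/sbin/nightly-backup.sh.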
     
  8. Jorem

    Jorem New Member

    Easy option: Install Webmin and use the Filesystem Backup utility. Easy backups on your local drive.

    If you need to get it to your home PC, just use SyncBack (Windows) or some other tool and sync via FTP to your home PC.
     
  9. choogendyk

    choogendyk New Member

    Wow. A lot of different advice. I'm sure you're having fun sorting it out.

    First of all, if you are going to do live backups or images of the root system, you will need some sort of snapshots. They reduce the risk of inconsistencies in the backup image. Only by shutting down to single user or booting off a CD can you completely eliminate the risk. So, I'll second the recommendation for LVM snapshots. On Solaris, I use fssnap for ufs snapshots, and zfs for its own snapshots.
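
    On Linux, the LVM route might look something like this (hypothetical names throughout -- vg0/root, /mnt/snap and /mnt/flash are examples, and the volume group needs free extents for the snapshot's copy-on-write area):

```shell
# Hypothetical LVM snapshot sketch; all names are examples.
lvcreate --snapshot --size 1G --name rootsnap /dev/vg0/root
mount -o ro /dev/vg0/rootsnap /mnt/snap          # stable, frozen view
tar -czf "/mnt/flash/root-$(date +%F).tar.gz" -C /mnt/snap .
umount /mnt/snap
lvremove -f /dev/vg0/rootsnap                    # drop the snapshot
```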

    rsync can work with another server, another drive, or your USB flash drive. So, if you choose that route, you could rsync a snapshot to the flash drive. Or, if you have a snapshot, just tar it.

    I know someone who uses a loopback device to create and mount a disk image that is just the size of the flash drive. He then does installs and configurations on the loopback device and dd's it out to the flash drive. The flash drive ends up actually being bootable for installing or recovering systems.
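
    In sketch form (the sizes, loop device, and target drive are made-up examples):

```shell
# Hypothetical loopback workflow; sizes and device names are examples.
dd if=/dev/zero of=/tmp/flash.img bs=1M count=7600   # image sized to the stick
losetup /dev/loop0 /tmp/flash.img                    # expose it as a block device
# ...partition, format, and install onto /dev/loop0 here...
losetup -d /dev/loop0                                # detach when done
dd if=/tmp/flash.img of=/dev/sdb bs=64k              # copy image to the real stick
```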

    MySQL, or any other database, is a special case. You have to follow proper procedures to back up databases. You may get away with backing up the file space where the data is stored if the system is totally quiet, but if it is active there will be inconsistencies and things held in memory that haven't been flushed. There are standard procedures to lock tables and dump MySQL; then you back up the dump. A packaged solution that takes care of all the details for you (and is open source) is ZRM (Zmanda Recovery Manager) for MySQL -- http://www.zmanda.com/backup-mysql.html.
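
    A hand-rolled version of that lock-and-dump step might look like this (the user, password, and path are placeholders):

```shell
# Hypothetical mysqldump sketch; credentials and paths are placeholders.
# --lock-all-tables gives a consistent dump even with MyISAM tables;
# --single-transaction is the lighter choice if everything is InnoDB.
mysqldump -u backupuser -p'PASSWORD' --all-databases --lock-all-tables \
    | gzip > "/var/backups/mysql-$(date +%F).sql.gz"
```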

    Typically, people separate different aspects of the problems you are facing and use specific tools to deal with them. So, a system imaging or jumpstart approach might be used to get the base system back up with a known configuration. Then a general backup program (such as Amanda -- http://amanda.zmanda.com/) to get the user data and specific customizations back. This is where the tape backups with off site tapes and history come in. Then something to recover the state of the databases. These would typically be dumped to disk and then backed up to tape with the general backup program. Recovery would reverse the process. Recover from tape and then load the database dump back to get the state.

    This doesn't necessarily give you your total solution. But maybe it gives you some conceptual idea of where you might be going.
     
  10. id10t

    id10t Member

    For our online course delivery system, we had RAID 5 set up for the actual data storage, and the system was rsync'd over ssh to a VMware machine every few hours.
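
    That sort of schedule is a one-line cron entry; a sketch (the host and paths are made up, and it assumes ssh keys are set up for unattended login):

```shell
# Hypothetical crontab entry: mirror the data volume to the standby VM
# over ssh every four hours. Host and paths are examples.
0 */4 * * * rsync -az --delete /srv/data/ backup@standby-vm:/srv/data/
```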

    Plain old drive failure is taken care of by the RAID, and the one time the RAID controller died we were able to flop over to the VM with only a few hours of data loss.

    Worked great; it had close to 200GB of data on it...
     
