I have a server running InnoDB; the database is about 40G and gets hit with reads and writes fairly often. I'd like to minimize downtime, so I'm looking at ways to avoid taking it down to run mysqldump on a regular basis. As discussed here: mysql_database_replication

1. My basic plan was to set up replication and back up the slave's database. From what I've read online, there seems to be a school of thought that replication is not reliable enough, i.e. that the slave and master will drift and the data will not restore properly if the master is lost. Is this a valid concern?

2. Is it feasible to set this plan up on a single server running two mysqlds, replicating to itself over localhost? I would then copy the backups to a remote host or an external drive.

3. Perhaps I can take some shortcuts in my case. Say I'm not concerned about the backup being exact: if I have 1 billion rows, I want to back up everything, but I really only care about the most recent 100k rows, since they are the most active. The older rows are unlikely to change, and even if they varied slightly, that would be okay. Is it feasible to have a script that passively dumps rows bit by bit in the background, rebuilding the backup slowly over time? For example, dumping 100 rows every 5 minutes until the whole database has been covered.
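For reference on point 2, here is a minimal sketch of what a single-host master/slave pair might look like, using `mysqld_multi`-style option groups. The ports, paths, and server-ids are placeholders, not a tested setup:

```ini
# /etc/my.cnf -- two instances on one host (paths/ports are examples)
[mysqld1]
port      = 3306
socket    = /var/lib/mysql1/mysql.sock
datadir   = /var/lib/mysql1
server-id = 1
log-bin   = mysql-bin        # master must write a binary log

[mysqld2]
port      = 3307
socket    = /var/lib/mysql2/mysql.sock
datadir   = /var/lib/mysql2
server-id = 2                # must differ from the master's id
```

The slave instance would then be pointed at the master over the loopback interface with something like `CHANGE MASTER TO MASTER_HOST='127.0.0.1', MASTER_PORT=3306, ...;` followed by `START SLAVE;`. Note both mysqlds would compete for the same disk and memory, so this protects against data loss from a bad dump, not against hardware failure.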
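To make the chunked background dump in point 3 concrete, here is a minimal sketch. It assumes the table has an auto-increment integer `id` column; the database and table names are hypothetical. It splits the id range into fixed-size chunks and builds one `mysqldump` command per chunk (using mysqldump's `--where` option to restrict the dump to matching rows and `--single-transaction` for a consistent InnoDB snapshot), so a cron job or sleep loop could execute one command per interval:

```python
# Sketch: split an id range into chunks and build one mysqldump
# command per chunk. Database/table/column names are hypothetical.

def chunk_ranges(max_id, chunk_size):
    """Yield (low, high) inclusive id ranges covering 1..max_id."""
    low = 1
    while low <= max_id:
        yield low, min(low + chunk_size - 1, max_id)
        low += chunk_size

def dump_command(db, table, low, high):
    """Build a mysqldump invocation for one chunk of rows.

    --single-transaction takes a consistent InnoDB snapshot without
    locking the table; --where limits the dump to this chunk.
    """
    return (
        f"mysqldump --single-transaction "
        f"--where=\"id BETWEEN {low} AND {high}\" "
        f"{db} {table} > {table}_{low}_{high}.sql"
    )

# Example: 250 rows in chunks of 100 -> three commands, one of
# which could be run every 5 minutes (e.g. from cron).
commands = [dump_command("mydb", "mytable", lo, hi)
            for lo, hi in chunk_ranges(250, 100)]
```

Since only the newest rows really matter here, the tail range (last 100k ids) could be re-dumped on every pass while the older chunks cycle slowly in the background.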