Virtual Machine Replication & Failover with VMWare Server & Debian Etch (4.0) - Page 2
3. Configuring DRBD and creating the replicated filesystem
When installing the drbd0.7 package, only the module source package is copied to the /usr/src directory. To actually install DRBD you have to extract and build it yourself:
cd /usr/src
tar xzf drbd0.7.tar.gz
cd /usr/src/modules/drbd/drbd
make && make install
Note: If you get the error "SORRY, kernel makefile not found. You need to tell me a correct KDIR!", reboot first so that you are running the kernel you built against.
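If a reboot does not clear the error, the build simply cannot locate the kernel build tree. A hedged alternative (assuming a stock Debian kernel whose headers are packaged) is to install the matching headers and pass the KDIR variable named in the error message explicitly:

```shell
# Assumption: a stock Debian kernel with packaged headers available.
apt-get install linux-headers-$(uname -r)

# Point the DRBD module build at the running kernel's build tree.
cd /usr/src/modules/drbd/drbd
make KDIR=/lib/modules/$(uname -r)/build && make install
```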
Now we need to configure DRBD to use our separate partition (/dev/sda7) as a DRBD device and then create a filesystem on it.
I suggest moving the installed drbd.conf aside and putting our own file in its place:
mv /etc/drbd.conf /etc/drbd.conf-sample
nano /etc/drbd.conf
You can use this drbd.conf file as a template:
resource vm1 {
  protocol C;
  incon-degr-cmd "echo '!DRBD! pri on incon-degr' | wall ; sleep 60 ; halt -f";

  startup {
    wfc-timeout 10;                # 10 seconds
    degr-wfc-timeout 30;           # 30 seconds
  }

  disk {
    on-io-error detach;
  }

  net {
    max-buffers 20000;             # Tune this setting to achieve the highest possible performance
    unplug-watermark 12000;        # Tune this setting to achieve the highest possible performance
    max-epoch-size 20000;          # Should be the same as max-buffers
  }

  syncer {
    rate 10M;                      # Use more if you have a Gigabit network. Speed is in Kilobytes, e.g. 10M = 10 Megabytes
    group 1;
    al-extents 257;
  }

  on server1 {                     # Use the EXACT hostname of your server as given by the command "uname -n"
    device /dev/drbd0;             # DRBD device ID
    disk /dev/sda7;                # Physical disk device - check your partitioning scheme!
    address 172.20.20.100:7789;    # Fixed IP address of server1
    meta-disk internal;            # Internal metadata storage is used here
  }

  on server2 {
    device /dev/drbd0;
    disk /dev/sda7;
    address 172.20.20.200:7789;
    meta-disk internal;
  }
}
NOTE: THIS FILE MUST BE THE SAME ON BOTH SERVERS!
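One way to guarantee this (a convenience sketch, using this guide's example hostnames) is to copy the file from server1 to server2 and compare checksums:

```shell
# Copy the config to server2 and verify both copies are identical.
# "server2" is this guide's example hostname; substitute your own.
scp /etc/drbd.conf server2:/etc/drbd.conf
md5sum /etc/drbd.conf
ssh server2 md5sum /etc/drbd.conf   # both checksums must match
```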
Now we can start the DRBD device and create the filesystem.
On both servers:
modprobe drbd
drbdadm up all
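At this point you can already inspect /proc/drbd. With DRBD 0.7, once both nodes are up they should connect as two Secondaries; the data is still marked inconsistent until the first full sync (the exact state line below is an illustration, not guaranteed output):

```shell
# Expect something along the lines of:
#   0: cs:Connected st:Secondary/Secondary ld:Inconsistent
cat /proc/drbd
```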
Now we define "server1" as the primary/master server:
On server1:
drbdsetup /dev/drbd0 primary --do-what-I-say
mkfs.ext3 /dev/drbd0
Wait for the ext3 filesystem to be created on /dev/drbd0, then:
drbdadm connect all
And wait for the initial synchronisation to complete. On slower networks this can take up to a few hours, depending on the disk size! You can check the status of the sync with this command:
cat /proc/drbd
During the sync, this should give you output similar to this:
version: 0.7.10 (api:77/proto:74)
SVN Revision: 1743 build by phil@mescal, 2005-01-31 12:22:07
0: cs:SyncSource st:Primary/Secondary ld:Consistent
ns:13441632 nr:0 dw:0 dr:13467108 al:0 bm:2369 lo:0 pe:23 ua:226 ap:0
[==========>.........] sync'ed: 53.1% (11606/24733)M
finish: 1:14:16 speed: 2,644 (2,204) K/sec
1: cs:Unconfigured
NOTE: Your disk write performance will be limited to the sync speed you see here! Check your buffer sizes to raise this to optimal values (you can make config changes and then run '/etc/init.d/drbd reload').
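Rather than re-running cat by hand, a small polling loop can wait until the resource reports cs:Connected. This is a sketch; the status file path is parameterised (defaulting to /proc/drbd) so the logic can also be tested against saved output:

```shell
#!/bin/sh
# Poll DRBD status until the initial sync finishes (cs:Connected).
# Optional argument: status file to read, defaulting to /proc/drbd.
STATUS="${1:-/proc/drbd}"
while ! grep -q 'cs:Connected' "$STATUS"; do
    # Show the current progress line, if present, then wait.
    grep "sync'ed" "$STATUS" || true
    sleep 30
done
echo "DRBD sync complete"
```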
Check the status periodically until the synchronisation completes, at which point the output should look similar to this:
version: 0.7.10 (api:77/proto:74)
SVN Revision: 1743 build by phil@mescal, 2005-01-31 12:22:07
0: cs:Connected st:Primary/Secondary ld:Consistent
ns:37139 nr:0 dw:0 dr:49035 al:0 bm:6 lo:0 pe:0 ua:0 ap:0
1: cs:Unconfigured
When the sync is complete, it is time to mount our DRBD filesystem onto the previously created /var/vm directory, the location specified for the virtual machines during the installation of VMware Server.
mount -t ext3 /dev/drbd0 /var/vm
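Before creating any virtual machines it is worth confirming that the mount succeeded and that this node is still the DRBD Primary (only the Primary may mount the device):

```shell
# The device should appear in the mount table...
mount | grep /dev/drbd0
# ...and this node must report st:Primary in /proc/drbd.
grep 'st:Primary' /proc/drbd
# Finally, check the usable space under the VM directory.
df -h /var/vm
```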
This part of the tutorial concludes the volume replication of your servers, which enables the virtual machines to be replicated onto both servers. This provides data security and ensures that virtual machines created on one server are always available on both. You should now create the virtual machines that you want included in your failover setup. Please check page 2 of "How To Install VMware Server On Debian 4.0" for more information on how to do this. You will need each VM's name and config file name to proceed!
The next part covers configuring the Heartbeat package and making sure that, in case of a failover, the virtual machines are properly initialized and started on the secondary server.