Setting Up A Highly Available NFS Server - Page 4
8 Install And Configure heartbeat
heartbeat is the control instance of this whole setup. It is installed on both server1 and server2, and each instance monitors the other server. If server1 goes down, for example, heartbeat on server2 detects this and makes server2 take over. heartbeat also starts and stops the NFS server on both server1 and server2, and it provides NFS as a virtual service via the IP address 192.168.0.174 so that the web server cluster nodes see only one NFS server.
First we install heartbeat:
server1/server2:
apt-get install heartbeat
Now we have to create three configuration files for heartbeat. They must be identical on server1 and server2!
server1/server2:
/etc/heartbeat/ha.cf:
logfacility local0
As nodenames we must use the output of uname -n on server1 and server2.
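A minimal ha.cf for this setup could look like the following sketch. The timing values here are assumptions you should tune to your environment, and the node lines must carry the exact output of uname -n on server1 and server2:

```
logfacility local0
keepalive 2
deadtime 10
bcast eth0
auto_failback on
node server1
node server2
```

keepalive and deadtime control how often the nodes ping each other and how long a node may stay silent before it is considered dead; bcast names the interface used for the heartbeat traffic.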
server1/server2:
/etc/heartbeat/haresources:
server1 IPaddr::192.168.0.174/24/eth0 drbddisk::r0 Filesystem::/dev/drbd0::/data::ext3 nfs-kernel-server
The first word is the output of uname -n on server1, no matter whether you create the file on server1 or on server2! After IPaddr we put our virtual IP address 192.168.0.174, and after drbddisk we use the name of our DRBD resource, which is r0 here (remember, that is the resource name we use in /etc/drbd.conf - if you used another one, you must use it here, too).
server1/server2:
/etc/heartbeat/authkeys:
auth 3
3 md5 somerandomstring
somerandomstring is a password which the two heartbeat daemons on server1 and server2 use to authenticate against each other. Use your own string here. You have the choice between three authentication mechanisms (crc, md5, and sha1); I use md5 because, unlike crc, it is a cryptographic hash.
/etc/heartbeat/authkeys should be readable by root only, therefore we do this:
server1/server2:
chmod 600 /etc/heartbeat/authkeys
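You can verify the mode change with stat. The demonstration below uses a scratch file /tmp/authkeys.demo rather than the real /etc/heartbeat/authkeys, since it only illustrates the permission change:

```shell
# Scratch file standing in for /etc/heartbeat/authkeys (demo only):
touch /tmp/authkeys.demo
chmod 600 /tmp/authkeys.demo

# Print the octal mode; 600 means read/write for the owner only:
stat -c '%a' /tmp/authkeys.demo
```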
Finally we start DRBD and heartbeat on server1 and server2:
server1/server2:
/etc/init.d/drbd start
/etc/init.d/heartbeat start
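After both services are up, /proc/drbd on server1 should report this node as Primary. The status line below is a canned sample in the old st: style used by DRBD versions of this era (an assumption about your DRBD version); on a real node, simply run cat /proc/drbd and look at the st: field:

```shell
# Canned sample of a /proc/drbd status line (demo only):
line='0: cs:Connected st:Primary/Secondary ds:UpToDate/UpToDate C r---'

# The part before the slash in st: is this node's role:
case "$line" in
  *st:Primary*) echo "this node is the DRBD primary" ;;
  *)            echo "this node is the DRBD secondary" ;;
esac
```

On server2 the same check should report Secondary as long as server1 is healthy.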
9 First Tests
Now we can do our first tests. On server1, run
server1:
ifconfig
In the output, the virtual IP address 192.168.0.174 should show up:
eth0 Link encap:Ethernet HWaddr 00:0C:29:A1:C5:9B
Also, run
server1:
df -h
on server1. You should see /data listed there now:
Filesystem Size Used Avail Use% Mounted on
If you run the same commands
server2:
ifconfig
df -h
on server2, you should see neither 192.168.0.174 nor /data.
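The "which node is active?" check can be wrapped in a tiny helper that looks for the virtual IP in an interface dump. holds_virtual_ip is a hypothetical name; the demo feeds it a canned ifconfig line, while on a live node you would pipe the real ifconfig output into it:

```shell
# Hypothetical helper: succeeds if the text on stdin mentions the virtual IP.
holds_virtual_ip() {
  grep -q '192\.168\.0\.174'
}

# Demo with a canned ifconfig line (on a real node: ifconfig | holds_virtual_ip):
if echo 'inet addr:192.168.0.174  Bcast:192.168.0.255' | holds_virtual_ip; then
  echo "this node is active"
else
  echo "this node is passive"
fi
```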
Now we create a test file in /data/export on server1 and then simulate a server failure of server1 (by stopping heartbeat):
server1:
touch /data/export/test1
/etc/init.d/heartbeat stop
If you run ifconfig and df -h on server2 now, you should see the IP address 192.168.0.174 and the /data partition, and
server2:
ls -l /data/export
should list the file test1 which you created on server1 before. So it has been mirrored to server2!
Now we create another test file on server2 and see if it gets mirrored to server1 when it comes up again:
server2:
touch /data/export/test2
server1:
/etc/init.d/heartbeat start
(Wait a few seconds.)
ifconfig
df -h
ls -l /data/export
You should see 192.168.0.174 and /data again on server1, which means it has taken over again (because it is the preferred node listed first in /etc/heartbeat/haresources), and you should also see the file /data/export/test2!