Setting Up A Highly Available NFS Server - Page 4

8 Install And Configure heartbeat

heartbeat is the control instance of this whole setup. It is going to be installed on server1 and server2, where each instance monitors the other server. If, for example, server1 goes down, heartbeat on server2 detects this and makes server2 take over. heartbeat also starts and stops the NFS server on both server1 and server2, and it provides NFS as a virtual service via the IP address 192.168.0.174 so that the web server cluster nodes see only one NFS server.

First we install heartbeat:

server1/server2:

apt-get install heartbeat

Now we have to create three configuration files for heartbeat. They must be identical on server1 and server2!

server1/server2:

/etc/heartbeat/ha.cf:

# log through the syslog facility local0
logfacility     local0
# send one heartbeat every 2 seconds
keepalive 2
# in production, use the longer deadtime:
#deadtime 30 # USE THIS!!!
deadtime 10
# interface on which the heartbeats are broadcast
bcast eth0
# the node names, exactly as reported by uname -n
node server1 server2

As node names we must use the output of uname -n on server1 and server2.
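You can double-check this by running:

server1/server2:

uname -n

The command should print server1 on the first node and server2 on the second one; if it prints something else, adjust the node line in /etc/heartbeat/ha.cf accordingly.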

server1/server2:

/etc/heartbeat/haresources:

server1  IPaddr::192.168.0.174/24/eth0 drbddisk::r0 Filesystem::/dev/drbd0::/data::ext3 nfs-kernel-server

The first word is the output of uname -n on server1, no matter whether you create the file on server1 or server2! After IPaddr we put our virtual IP address 192.168.0.174, and after drbddisk we use the name of our DRBD resource, which is r0 here (remember, that is the resource name we use in /etc/drbd.conf; if you used another one, you must use it here, too).
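For example, if you had called your DRBD resource r1 in /etc/drbd.conf (a made-up name, just for illustration), the line would have to read:

server1  IPaddr::192.168.0.174/24/eth0 drbddisk::r1 Filesystem::/dev/drbd0::/data::ext3 nfs-kernel-server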

server1/server2:

/etc/heartbeat/authkeys:

auth 3
3 md5 somerandomstring

somerandomstring is a password which the two heartbeat daemons on server1 and server2 use to authenticate against each other. Use your own string here. You have the choice between three authentication mechanisms (crc, md5, and sha1). I use md5 here; if you want the strongest one, use sha1 instead (crc provides no real authentication at all).
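If you don't want to make up a password yourself, you can generate a random string, for example like this (just one possibility; dd, md5sum, and awk are all part of a standard Debian install):

dd if=/dev/urandom bs=512 count=1 2>/dev/null | md5sum | awk '{ print $1 }'

Use the resulting string instead of somerandomstring, identically on both servers.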

/etc/heartbeat/authkeys should be readable by root only, therefore we do this:

server1/server2:

chmod 600 /etc/heartbeat/authkeys
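You can verify the permissions afterwards:

server1/server2:

ls -l /etc/heartbeat/authkeys

The first column of the output should read -rw-------, i.e. the file is readable and writable by root only.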

Finally we start DRBD and heartbeat on server1 and server2:

server1/server2:

/etc/init.d/drbd start
/etc/init.d/heartbeat start
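Because we set logfacility local0 in /etc/heartbeat/ha.cf, heartbeat logs through syslog; on a default Debian system these messages end up in /var/log/syslog. To check that both daemons came up cleanly, you can run something like:

server1/server2:

grep -i heartbeat /var/log/syslog | tail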


9 First Tests

Now we can do our first tests. On server1, run

server1:

ifconfig

In the output, the virtual IP address 192.168.0.174 should show up:

eth0      Link encap:Ethernet  HWaddr 00:0C:29:A1:C5:9B
          inet addr:192.168.0.172  Bcast:192.168.0.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:fea1:c59b/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:18992 errors:0 dropped:0 overruns:0 frame:0
          TX packets:24816 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:2735887 (2.6 MiB)  TX bytes:28119087 (26.8 MiB)
          Interrupt:177 Base address:0x1400

eth0:0    Link encap:Ethernet  HWaddr 00:0C:29:A1:C5:9B
          inet addr:192.168.0.174  Bcast:192.168.0.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          Interrupt:177 Base address:0x1400

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:71 errors:0 dropped:0 overruns:0 frame:0
          TX packets:71 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:5178 (5.0 KiB)  TX bytes:5178 (5.0 KiB)

Also, run

server1:

df -h

on server1. You should see /data listed there now:

Filesystem            Size  Used Avail Use% Mounted on
/dev/sda5             4.6G  430M  4.0G  10% /
tmpfs                 126M     0  126M   0% /dev/shm
/dev/sda1              89M   11M   74M  13% /boot
/dev/drbd0             24G   33M   23G   1% /data

If you do the same

server2:

ifconfig
df -h

on server2, you shouldn't see 192.168.0.174 and /data.
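You can also check the DRBD state directly (the exact output format depends on your DRBD version):

server1/server2:

cat /proc/drbd

server1 should report st:Primary/Secondary for the resource, while server2 should show st:Secondary/Primary.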

Now we create a test file in /data/export on server1 and then simulate a server failure of server1 (by stopping heartbeat):

server1:

touch /data/export/test1
/etc/init.d/heartbeat stop

If you run ifconfig and df -h on server2 now, you should see the IP address 192.168.0.174 and the /data partition, and

server2:

ls -l /data/export

should list the file test1 which you created on server1 before. So it has been mirrored to server2!

Now we create another test file on server2 and see if it gets mirrored to server1 when it comes up again:

server2:

touch /data/export/test2

server1:

/etc/init.d/heartbeat start

(Wait a few seconds.)

ifconfig
df -h
ls -l /data/export

You should see 192.168.0.174 and /data again on server1, which means it has taken over again (because server1 is listed as the preferred node in /etc/heartbeat/haresources), and you should also see the file /data/export/test2!
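Whether server1 takes over again automatically is controlled by heartbeat's auto_failback option (one of the comments below points this out as well). If failback does not happen for you, set it explicitly in /etc/heartbeat/ha.cf on both servers and restart heartbeat:

auto_failback on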


Comments

From: Clearjet at: 2009-06-29 15:53:39

The text says:

Also, make sure /dev/sda7 as well as /dev/sda8 are identical in size

But the illustration indicates:

/dev/sda7 -- 150 MB unmounted
/dev/sda8 -- 26 GB unmounted

So which is it?

Thanks

From: Anonymous at: 2009-07-03 11:50:05

It means the same size on BOTH servers in the cluster.

From: gryger at: 2010-10-04 22:13:07

And here: http://docs.homelinux.org another well explained tutorial about DRBD and NFS on Debian.

From: Anonymous at: 2014-03-05 19:18:07

This is somewhat the set-up that I have been looking for; however, when joining this “Highly Available NFS Server or a Balanced MySQL Cluster” with a “Loadbalanced High-Availability Web Server Apache Cluster”, my concern is the IPs...

The tutorials for both the “Loadbalanced High-Availability MySQL Cluster” and the “Loadbalanced High-Availability Web Server Apache Cluster” utilize the same IP addresses...

Within this tutorial it's mentioned: “Virtual IP address that represents the NFS cluster to the outside and also a NFS client IP address...”

I am looking to join two of the clusters to make a highly available, stable web hosting cluster, utilizing either NFS or MySQL for the back-end...

Which IPs should be used for each node?

From: Anonymous at: 2006-07-13 21:56:56

This may be pretty obvious, but when you install the kernel-headers package, make sure you're using the version which matches your running kernel (for example kernel-headers-2.6.8-2-686-smp).

From: at: 2007-01-11 05:23:25

Yes, this is VERY IMPORTANT.

When installing your kernel headers, simply do this:

apt-get install kernel-headers-`uname -r` drbd0.7-module-source drbd0.7-utils

The `uname -r` will automatically insert your proper kernel version into the command.

Try running the command uname -r once, by itself, to see.

From: Jason Priebe at: 2009-04-08 01:17:40

We considered the DRBD approach as well when we looked to replace our NetApp cluster with a linux-based solution.  We settled on a slightly different approach (using RHEL and Cluster Suite). I welcome you to read my blog post about it.

From: Anonymous at: 2006-03-26 22:06:50

With recent drbd utils (0.7.17), I had to do

drbdsetup /dev/drbd0 primary --do-what-I-say

From: Anonymous at: 2009-08-04 12:48:43

On version 8.0.14, I have to do:

drbdsetup /dev/drbd0 primary -o

From: Anonymous at: 2009-11-07 15:17:35

Using drbd8-utils you should use:

drbdadm -- --overwrite-data-of-peer primary all

From: Anonymous at: 2006-03-26 22:09:26

Also, before doing mount -t ext3 /dev/drbd0 /data, you should of course create a filesystem there first:

mkfs.ext3 /dev/drbd0

I suggest making an XFS filesystem instead.

From: starzinger at: 2006-03-10 10:24:13

To enable automatic failback from server2 to server1, you need to put in the following:

/etc/heartbeat/ha.cf:

auto_failback on

From: Anonymous at: 2006-03-07 16:30:56

If I want the data to be available to the NFS machines themselves, do you recommend mounting the virtual IP on them?

From: at: 2011-01-19 12:58:53

I think that is the only way;

why would we be using NFS otherwise?

In DRBD just one node is active; you can't make changes on the passive one.

From: Anonymous at: 2006-03-09 18:46:27

Thanks for your info, pretty interesting.

Just two questions:

- Why is ext3 your choice instead of reiserfs?

- Why are you using ip-alias instead of iproute2?

Thanks in advance.

From: Anonymous at: 2006-03-13 10:48:37

I've thought about doing this before, but using iSCSI and the built-in /dev/md (aka software RAID) to link the devices together as a mirrored device. Since iSCSI is supposedly a more open standard and can be used with multiple operating systems, it'll be easier to implement on non-Linux systems as well.

From: arkarwmh at: 2015-02-09 10:46:19

But how do I set up the "Virtual IP" across the 2 servers?