Setting Up A Highly Available NFS Server - Page 3

6 Configure DRBD

Now we load the DRBD kernel module on both server1 and server2. We only need to do this manually this once; afterwards it will be loaded automatically by the DRBD init script.


modprobe drbd

Let's configure DRBD:


drbdadm up all
cat /proc/drbd

The last command should show something like this (on both server1 and server2):

version: 0.7.10 (api:77/proto:74)
SVN Revision: 1743 build by phil@mescal, 2005-01-31 12:22:07
0: cs:Connected st:Secondary/Secondary ld:Inconsistent
ns:0 nr:0 dw:0 dr:0 al:0 bm:1548 lo:0 pe:0 ua:0 ap:0
1: cs:Unconfigured

You see that both NFS servers report that they are secondary and that the data is inconsistent. This is because no initial sync has been made yet.
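The status fields in /proc/drbd can be picked apart with standard tools. A minimal sketch, using the status line captured above as sample input (on a live node you would read /proc/drbd directly):

```shell
# Status line as shown above; on a live node, read /proc/drbd directly.
status='0: cs:Connected st:Secondary/Secondary ld:Inconsistent'

# Pull out the connection state (cs), node roles (st), and data state (ld).
cs=$(echo "$status" | sed -n 's/.*cs:\([A-Za-z]*\).*/\1/p')
st=$(echo "$status" | sed -n 's/.*st:\([A-Za-z]*\/[A-Za-z]*\).*/\1/p')
ld=$(echo "$status" | sed -n 's/.*ld:\([A-Za-z]*\).*/\1/p')

echo "connection=$cs roles=$st data=$ld"
# prints: connection=Connected roles=Secondary/Secondary data=Inconsistent
```

The same extraction works on any of the /proc/drbd outputs in this tutorial, which makes it handy in monitoring scripts.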

I want to make server1 the primary NFS server and server2 the "hot standby". If server1 fails, server2 takes over; when server1 comes back, all data that changed in the meantime is mirrored back from server2 to server1 so that the data is always consistent.

This next step has to be done only on server1!


drbdadm -- --do-what-I-say primary all

Now we start the initial sync between server1 and server2 so that the data on both servers becomes consistent. On server1, we do this:


drbdadm -- connect all

The initial sync is going to take a few hours (depending on the size of /dev/sda8 (/dev/hda8...)) so please be patient.
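Instead of re-running cat /proc/drbd by hand, watch -n10 cat /proc/drbd refreshes the view every ten seconds. The completion percentage can also be extracted from the sync'ed field; a small sketch, using the sample progress line from the output shown below as input:

```shell
# Sample progress line from /proc/drbd during the initial sync
# (taken from the example output in this tutorial).
line="[==========>.........] sync'ed: 53.1% (11606/24733)M"

# Extract the completion percentage from the sync'ed field.
pct=$(echo "$line" | sed -n "s/.*sync'ed: *\([0-9.]*\)%.*/\1/p")
echo "initial sync ${pct}% complete"
# prints: initial sync 53.1% complete
```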

You can see the progress of the initial sync like this on server1 or server2:


cat /proc/drbd

The output should look like this:

version: 0.7.10 (api:77/proto:74)
SVN Revision: 1743 build by phil@mescal, 2005-01-31 12:22:07
0: cs:SyncSource st:Primary/Secondary ld:Consistent
ns:13441632 nr:0 dw:0 dr:13467108 al:0 bm:2369 lo:0 pe:23 ua:226 ap:0
[==========>.........] sync'ed: 53.1% (11606/24733)M
finish: 1:14:16 speed: 2,644 (2,204) K/sec
1: cs:Unconfigured

When the initial sync is finished, the output should look like this:

version: 0.7.10 (api:77/proto:74)
SVN Revision: 1743 build by phil@mescal, 2005-01-31 12:22:07
0: cs:Connected st:Primary/Secondary ld:Consistent
ns:37139 nr:0 dw:0 dr:49035 al:0 bm:6 lo:0 pe:0 ua:0 ap:0
1: cs:Unconfigured

7 Some Further NFS Configuration

NFS stores some important information (e.g. about file locks) in /var/lib/nfs. Now what happens if server1 goes down? server2 takes over, but its information in /var/lib/nfs will differ from the information in server1's /var/lib/nfs directory. Therefore we do some tweaking so that these details are stored on our /data partition (/dev/sda8 or /dev/hda8...), which DRBD mirrors between server1 and server2. Thus, if server1 goes down, server2 can use server1's NFS details.


On both server1 and server2:

mkdir /data

On server1:

mount -t ext3 /dev/drbd0 /data
mv /var/lib/nfs/ /data/
ln -s /data/nfs/ /var/lib/nfs
mkdir /data/export
umount /data

On server2:

rm -fr /var/lib/nfs/
ln -s /data/nfs/ /var/lib/nfs
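On the live servers, ls -ld /var/lib/nfs should now show a symlink pointing at /data/nfs. The move-and-symlink pattern itself can be tried out harmlessly; a sketch demonstrated under /tmp with stand-in paths (the real paths in this tutorial are /data/nfs and /var/lib/nfs):

```shell
# Demonstration of the move-and-symlink pattern under /tmp;
# on the real servers the paths are /data/nfs and /var/lib/nfs.
base=/tmp/ha-nfs-demo
mkdir -p "$base/data/nfs"                     # stands in for /data/nfs
ln -sfn "$base/data/nfs" "$base/var-lib-nfs"  # stands in for /var/lib/nfs

# The link should resolve to the mirrored location:
readlink "$base/var-lib-nfs"
# prints: /tmp/ha-nfs-demo/data/nfs
```

Because both servers symlink /var/lib/nfs to the same mirrored location, whichever node is primary always sees the current lock state.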

Comments

From: Clearjet at: 2009-06-29 15:53:39

The text says:

Also, make sure /dev/sda7 as well as /dev/sda8 are identical in size

But the illustration indicates:

/dev/sda7 -- 150 MB unmounted
/dev/sda8 -- 26 GB unmounted

So which is it?


From: Anonymous at: 2009-07-03 11:50:05

It means the same size on BOTH servers in the cluster.

From: gryger at: 2010-10-04 22:13:07

And here: another well explained tutorial about DRBD and NFS on Debian.

From: Anonymous at: 2014-03-05 19:18:07

This is roughly the set-up I have been looking for; however, when joining this “Highly Available NFS Server or a Balanced MySQL Cluster” with a “Loadbalanced High-Availability Web Server Apache Cluster”, my concern is the IPs...

The tutorial for both “Loadbalanced High-Availability MySQL Cluster and Loadbalanced High-Availability Web Server Apache Cluster” utilize the same IP addresses…

Within this tutorial it’s mentioned “Virtual IP address that represents the NFS cluster to the outside and also a NFS client IP address...”

I am looking to join two of the clusters to make a highly available stable web hosting cluster with utilizing either NFS or MySQL for the back-end…

Which IP’s should be used for each node?

From: Anonymous at: 2006-07-13 21:56:56

This may be pretty obvious, but when you install the kernel-headers package, make sure you're using the version that matches your running kernel (for example kernel-headers-2.6.8-2-686-smp).


From: at: 2007-01-11 05:23:25

Yes, this is VERY IMPORTANT.

When installing your kernel headers, simply do this:

apt-get install kernel-headers-`uname -r` drbd0.7-module-source drbd0.7-utils


The `uname -r` will automatically insert your proper kernel version into the command.

Try running the command uname -r once, by itself, to see.

From: Jason Priebe at: 2009-04-08 01:17:40

We considered the DRBD approach as well when we looked to replace our NetApp cluster with a linux-based solution.  We settled on a slightly different approach (using RHEL and Cluster Suite). I welcome you to read my blog post about it.

From: Anonymous at: 2006-03-26 22:06:50

with recent drbd utils (0.7.17), I had to do

drbdsetup /dev/drbd0 primary --do-what-I-say

From: Anonymous at: 2009-08-04 12:48:43

On version 8.0.14, I have to do :

drbdsetup /dev/drbd0 primary -o

From: Anonymous at: 2009-11-07 15:17:35

Using drbd8-utils you should use:

drbdadm -- --overwrite-data-of-peer primary all


From: Anonymous at: 2006-03-26 22:09:26

Also, before doing mount -t ext3 /dev/drbd0 /data, you should of course create a filesystem there first:

mkfs.ext3 /dev/drbd0

I suggest using an XFS filesystem instead.

From: starzinger at: 2006-03-10 10:24:13

To enable automatic failback from server2 to server1, you need to put the following in your heartbeat configuration (/etc/ha.d/ha.cf):


auto_failback on

From: Anonymous at: 2006-03-07 16:30:56

If I want the data to be available to the NFS machines themselves, do you recommend mounting the virtual IP on them?

From: at: 2011-01-19 12:58:53

I think that is the only way; otherwise, why are we using NFS at all?

In DRBD only one node is active; you can't make changes on the passive one.


From: Anonymous at: 2006-03-09 18:46:27

Thanks for your info, pretty interesting.

Just two questions:

- Why ext3 is your choice instead of reiserfs?

- Why are you using ip-alias instead of iproute2?

Thanks in advance.

From: Anonymous at: 2006-03-13 10:48:37

I've thought about doing this before, but using iSCSI and the built-in /dev/md (software RAID) to link the devices together as a mirrored device. Since iSCSI is supposedly a more open standard and can be used with multiple operating systems, it would be easier to implement on non-Linux systems as well.

From: arkarwmh at: 2015-02-09 10:46:19

But how do I set up the virtual IP across the 2 servers?