Setting Up A Highly Available NFS Server - Page 2

4 Install NFS Server

Next we install the NFS server on both server1 and server2:

server1/server2:

apt-get install nfs-kernel-server
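
You can verify that both packages were installed correctly; dpkg -l prints ii in front of packages that are installed (nfs-common is pulled in as a dependency):

server1/server2:

dpkg -l nfs-kernel-server nfs-common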

Then we remove the system bootup links for NFS because NFS will be started and controlled by heartbeat in our setup:

server1/server2:

update-rc.d -f nfs-kernel-server remove
update-rc.d -f nfs-common remove
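
Note that update-rc.d only removes the bootup links; if Debian already started the NFS daemon when the package was installed, you can stop it by hand, since heartbeat will control it from now on. A quick sketch (using the standard Debian init script; the find should print nothing once the links are gone):

server1/server2:

/etc/init.d/nfs-kernel-server stop
find /etc/rc?.d -name '*nfs*'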

We want to export the directory /data/export (this will be the NFS share that our web server cluster nodes use to serve web content), so we edit /etc/exports on server1 and server2. It should contain only the following line:

server1/server2:

/etc/exports:

/data/export/ 192.168.0.0/255.255.255.0(rw,no_root_squash,no_all_squash,sync)

This means that /data/export will be accessible by all systems from the 192.168.0.x subnet. You can limit access to a single system by using 192.168.0.100/255.255.255.255 instead of 192.168.0.0/255.255.255.0, for example. See

man 5 exports

to learn more about this.
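
For example, the single-system variant mentioned above would make /etc/exports look like this:

/data/export/ 192.168.0.100/255.255.255.255(rw,no_root_squash,no_all_squash,sync)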

Later in this tutorial we will create /data/export on our empty (and still unmounted!) partition /dev/sda8.


5 Install DRBD

Next we install DRBD on both server1 and server2. Make sure the kernel-headers package matches your running kernel (compare with the output of uname -r):

server1/server2:

apt-get install kernel-headers-2.6.8-2-386 drbd0.7-module-source drbd0.7-utils
cd /usr/src/
tar xvfz drbd0.7.tar.gz     # unpack the module source shipped by drbd0.7-module-source
cd modules/drbd/drbd
make                        # build the DRBD kernel module against the installed headers
make install                # install the module into the kernel's module tree
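
If the build and installation succeeded, the new module should load into the running kernel; DRBD exposes its version through /proc/drbd, so a quick sanity check looks like this:

server1/server2:

modprobe drbd
cat /proc/drbd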

Then edit /etc/drbd.conf on server1 and server2. It must be identical on both systems and looks like this:

server1/server2:

/etc/drbd.conf:

resource r0 {

  protocol C;
  incon-degr-cmd "halt -f";

  startup {
    degr-wfc-timeout 120;    # 2 minutes.
  }

  disk {
    on-io-error detach;
  }

  net {
  }

  syncer {
    rate 10M;
    group 1;
    al-extents 257;
  }

  on server1 {                      # ** EDIT ** the hostname of server 1 (uname -n)
    device /dev/drbd0;
    disk /dev/sda8;                 # ** EDIT ** data partition on server 1
    address 192.168.0.172:7788;     # ** EDIT ** IP address on server 1
    meta-disk /dev/sda7[0];         # ** EDIT ** 128MB partition for DRBD on server 1
  }

  on server2 {                      # ** EDIT ** the hostname of server 2 (uname -n)
    device /dev/drbd0;
    disk /dev/sda8;                 # ** EDIT ** data partition on server 2
    address 192.168.0.173:7788;     # ** EDIT ** IP address on server 2
    meta-disk /dev/sda7[0];         # ** EDIT ** 128MB partition for DRBD on server 2
  }

}

You can use any resource name you like; here it's r0. Please make sure you put the correct hostnames of server1 and server2 into /etc/drbd.conf. DRBD expects the hostnames exactly as they are shown by the command

uname -n

If you set the hostnames to server1 and server2 during the basic Debian installation, the output of uname -n should be server1 and server2 respectively.

Also make sure you replace the IP addresses and the disks appropriately. If you use /dev/hda instead of /dev/sda, put /dev/hda8 instead of /dev/sda8 into /etc/drbd.conf (the same goes for the meta-disk where DRBD stores its metadata). /dev/sda8 (or /dev/hda8...) will be used as our NFS share later on.
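
Once /etc/drbd.conf is in place on both machines, you can let drbdadm parse the file and echo the configuration back; this is a quick syntax check that does not touch the disks:

server1/server2:

drbdadm dump all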

Falko Timme

Comments

By: Anonymous

This may be pretty obvious, but when you install the kernel-headers package, make sure you're using the version that matches your running kernel (for example, kernel-headers-2.6.8-2-686-smp).

By:

Yes, this is VERY IMPORTANT.

When installing your kernel headers, simply do this:

apt-get install kernel-headers-`uname -r` drbd0.7-module-source drbd0.7-utils

The `uname -r` will automatically insert your proper kernel version into the command.

Try running the command uname -r once, by itself, to see.

By: Jason Priebe

We considered the DRBD approach as well when we looked to replace our NetApp cluster with a Linux-based solution. We settled on a slightly different approach (using RHEL and Cluster Suite). I welcome you to read my blog post about it.