
High-Availability Storage With GlusterFS On CentOS 5.4 - Automatic File Replication (Mirror) Across Two Storage Servers - Page 2


On this page

  - 3 Setting Up The GlusterFS Client
  - 4 Testing
  - 5 Links

3 Setting Up The GlusterFS Client

client1.example.com:

GlusterFS isn't available as a package for CentOS 5.4, so we have to build it ourselves. First we install the prerequisites:

yum groupinstall 'Development Tools'

yum groupinstall 'Development Libraries'

yum install libibverbs-devel fuse-devel

Then we load the fuse kernel module...

modprobe fuse

... and create the file /etc/rc.modules with the following contents so that the fuse kernel module will be loaded automatically whenever the system boots:

vi /etc/rc.modules

modprobe fuse

Make the file executable:

chmod +x /etc/rc.modules
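You can verify that the module is actually loaded with:

lsmod | grep fuse

If the output contains a line starting with fuse, the module is loaded.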

Then we download the GlusterFS 2.0.9 sources (please note that this is the same version that is installed on the servers!) and build GlusterFS as follows:

cd /tmp
wget http://ftp.gluster.com/pub/gluster/glusterfs/2.0/LATEST/glusterfs-2.0.9.tar.gz
tar xvfz glusterfs-2.0.9.tar.gz
cd glusterfs-2.0.9
./configure

At the end of the ./configure output, you should see something like this:

[...]
GlusterFS configure summary
===========================
FUSE client        : yes
Infiniband verbs   : yes
epoll IO multiplex : yes
Berkeley-DB        : yes
libglusterfsclient : yes
argp-standalone    : no

make && make install
ldconfig

Check the GlusterFS version afterwards (should be 2.0.9):

glusterfs --version

[root@client1 glusterfs-2.0.9]# glusterfs --version
glusterfs 2.0.9 built on Mar 1 2010 15:58:06
Repository revision: v2.0.9
Copyright (c) 2006-2009 Gluster Inc. <http://www.gluster.com>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU General Public License.
[root@client1 glusterfs-2.0.9]#
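If glusterfs complains about a missing shared library instead, check that ldconfig has picked up the freshly installed libraries:

ldconfig -p | grep glusterfs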

Then we create the following two directories:

mkdir /mnt/glusterfs
mkdir /etc/glusterfs

Next we create the file /etc/glusterfs/glusterfs.vol:

vi /etc/glusterfs/glusterfs.vol

# connection to the first storage server
volume remote1
  type protocol/client
  option transport-type tcp
  option remote-host server1.example.com
  option remote-subvolume brick
end-volume

# connection to the second storage server
volume remote2
  type protocol/client
  option transport-type tcp
  option remote-host server2.example.com
  option remote-subvolume brick
end-volume

# mirror every write to both servers
volume replicate
  type cluster/replicate
  subvolumes remote1 remote2
end-volume

# buffer writes up to 1MB before sending them over the network
volume writebehind
  type performance/write-behind
  option window-size 1MB
  subvolumes replicate
end-volume

# client-side read cache of 512MB
volume cache
  type performance/io-cache
  option cache-size 512MB
  subvolumes writebehind
end-volume

Make sure you use the correct server hostnames or IP addresses in the option remote-host lines!
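If you are not sure whether the client can actually reach both servers, a quick connectivity check can save debugging time later. This is just a sketch: 6996 is the default glusterfsd listen port in GlusterFS 2.x, so adjust it if you changed the port in the server volfiles, and the telnet client must be installed:

for host in server1.example.com server2.example.com; do
  ping -c 1 $host                # name resolution and basic reachability
  telnet $host 6996 </dev/null   # prints "Connected to ..." if glusterfsd is listening
done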

That's it! Now we can mount the GlusterFS filesystem to /mnt/glusterfs with one of the following two commands:

glusterfs -f /etc/glusterfs/glusterfs.vol /mnt/glusterfs

or

mount -t glusterfs /etc/glusterfs/glusterfs.vol /mnt/glusterfs

You should now see the new share in the outputs of...

mount

[root@client1 ~]# mount
/dev/mapper/VolGroup00-LogVol00 on / type ext3 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
/dev/sda1 on /boot type ext3 (rw)
tmpfs on /dev/shm type tmpfs (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
glusterfs#/etc/glusterfs/glusterfs.vol on /mnt/glusterfs type fuse (rw,allow_other,default_permissions,max_read=131072)
[root@client1 ~]#

... and...

df -h

[root@client1 ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
                       29G  2.2G   25G   9% /
/dev/sda1              99M   13M   82M  14% /boot
tmpfs                 187M     0  187M   0% /dev/shm
glusterfs#/etc/glusterfs/glusterfs.vol
                       28G  2.3G   25G   9% /mnt/glusterfs
[root@client1 ~]#

(server1.example.com and server2.example.com each have 28GB of space for the GlusterFS filesystem, but because the data is mirrored, the client doesn't see 56GB (2 x 28GB), but only 28GB.)

Instead of mounting the GlusterFS share manually on the client, you could modify /etc/fstab so that the share gets mounted automatically when the client boots.

Open /etc/fstab and append the following line:

vi /etc/fstab

[...]
/etc/glusterfs/glusterfs.vol  /mnt/glusterfs  glusterfs  defaults  0  0
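If you want to test the new entry without rebooting right away, you can unmount the share and remount it through fstab (calling mount with just the mount point makes it look the entry up in /etc/fstab):

umount /mnt/glusterfs
mount /mnt/glusterfs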

To test whether the share also gets mounted automatically at boot time, reboot the client:

reboot

After the reboot, you should find the share in the outputs of...

df -h

... and...

mount

 

4 Testing

Now let's create some test files on the GlusterFS share:

client1.example.com:

touch /mnt/glusterfs/test1
touch /mnt/glusterfs/test2

Now let's check the /data/export directory on server1.example.com and server2.example.com. The test1 and test2 files should be present on each node:

server1.example.com/server2.example.com:

ls -l /data/export

[root@server1 ~]# ls -l /data/export
total 0
-rw-r--r-- 1 root root 0 2010-02-22 16:50 test1
-rw-r--r-- 1 root root 0 2010-02-22 16:50 test2
[root@server1 ~]#
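If you also want to verify that file contents (not just file names) are replicated, you can write some data from the client and compare checksums on the bricks (a quick sketch; test-data is just an example file name):

client1.example.com:

echo "hello gluster" > /mnt/glusterfs/test-data
md5sum /mnt/glusterfs/test-data

server1.example.com/server2.example.com:

md5sum /data/export/test-data

All three checksums should be identical. Remove the example file again so it doesn't show up in the listings below:

client1.example.com:

rm -f /mnt/glusterfs/test-data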

Now we shut down server1.example.com and add/delete some files on the GlusterFS share on client1.example.com.

server1.example.com:

shutdown -h now

client1.example.com:

touch /mnt/glusterfs/test3
touch /mnt/glusterfs/test4
rm -f /mnt/glusterfs/test2

The changes should be visible in the /data/export directory on server2.example.com:

server2.example.com:

ls -l /data/export

[root@server2 ~]# ls -l /data/export
total 0
-rw-r--r-- 1 root root 0 2010-02-22 16:50 test1
-rw-r--r-- 1 root root 0 2010-02-22 16:53 test3
-rw-r--r-- 1 root root 0 2010-02-22 16:53 test4
[root@server2 ~]#

Let's boot server1.example.com again and take a look at the /data/export directory:

server1.example.com:

ls -l /data/export

[root@server1 ~]# ls -l /data/export
total 0
-rw-r--r-- 1 root root 0 2010-02-22 16:50 test1
-rw-r--r-- 1 root root 0 2010-02-22 16:50 test2
[root@server1 ~]#

As you can see, server1.example.com hasn't noticed the changes that happened while it was down. This is easy to fix: all we need to do is invoke a read command on the GlusterFS share on client1.example.com, e.g.:

client1.example.com:

ls -l /mnt/glusterfs/

[root@client1 ~]# ls -l /mnt/glusterfs/
total 0
-rw-r--r-- 1 root root 0 2010-02-22 16:50 test1
-rw-r--r-- 1 root root 0 2010-02-22 16:53 test3
-rw-r--r-- 1 root root 0 2010-02-22 16:53 test4
[root@client1 ~]#
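A plain ls like this only triggers the self-heal for the directory entries it looks up. On a share with a deeper directory tree, you can walk the whole mount once so that every file and directory gets examined (a sketch):

find /mnt/glusterfs -print0 | xargs -0 stat >/dev/null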

Now take a look at the /data/export directory on server1.example.com again, and you should see that the changes have been replicated to that node:

server1.example.com:

ls -l /data/export

[root@server1 ~]# ls -l /data/export
total 0
-rw-r--r-- 1 root root 0 2010-02-22 16:50 test1
-rw-r--r-- 1 root root 0 2010-02-22 16:53 test3
-rw-r--r-- 1 root root 0 2010-02-22 16:53 test4
[root@server1 ~]#
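If you do not want to remember to do this after every outage, you can run the directory walk from the previous step periodically via cron on the client (a sketch; the five-minute schedule is just an example):

crontab -e

*/5 * * * * find /mnt/glusterfs -print0 | xargs -0 stat >/dev/null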

 



Comments

By: Mudgen

GlusterFS packages for EL5 are available in the Fedora Project's Extra Packages for Enterprise Linux 5 (epel) repository.

 http://fedoraproject.org/wiki/EPEL/FAQ

By:

Hi,

I had problems with "transport endpoint not connected" when trying to use yum to install gluster, but when I followed this howto to the letter, it worked perfectly.

Having said that, it would be nice to have a newer working version; I see the version on the gluster website (RPMs) is at 3.x.

C

By: Steve

What if you shut down server2 before starting server1 back up? How will it be able to know that the directories changed?

No magic, I think: there is no way to read the updated status. The best you could guarantee would be an error response until the servers sync back up... isn't it?