
High-Availability Storage With GlusterFS On Debian Lenny - Automatic File Replication Across Two Storage Servers - Page 2

3 Setting Up The GlusterFS Client

client1.example.com:

On the client, we need to install fuse and GlusterFS. Instead of installing the libfuse2 package from the Debian repository, we install a patched version with better support for GlusterFS.

First we install the prerequisites again:

aptitude install sshfs build-essential flex bison byacc libdb4.6 libdb4.6-dev

Then we build fuse as follows (you can find the latest patched fuse version on ftp://ftp.zresearch.com/pub/gluster/glusterfs/fuse/):

cd /tmp
wget ftp://ftp.zresearch.com/pub/gluster/glusterfs/fuse/fuse-2.7.4glfs11.tar.gz
tar -zxvf fuse-2.7.4glfs11.tar.gz
cd fuse-2.7.4glfs11
./configure
make && make install
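
To make sure fuse is actually usable, you can check that the kernel module loads and that the device node is present (just a quick sanity check; the module may already be loaded on your system):

modprobe fuse
lsmod | grep fuse
ls -l /dev/fuse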

Afterwards we build GlusterFS (just like on the server)...

cd /tmp
wget http://ftp.gluster.com/pub/gluster/glusterfs/2.0/LATEST/glusterfs-2.0.1.tar.gz
tar xvfz glusterfs-2.0.1.tar.gz
cd glusterfs-2.0.1
./configure --prefix=/usr > /dev/null
make && make install
ldconfig
glusterfs --version
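
If glusterfs --version complains about missing shared libraries, you can double-check that the freshly installed libraries were picked up by the ldconfig run above (assuming the /usr prefix used in ./configure):

ldconfig -p | grep glusterfs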

... and create the following two directories:

mkdir /mnt/glusterfs
mkdir /etc/glusterfs

Next we create the file /etc/glusterfs/glusterfs.vol:

vi /etc/glusterfs/glusterfs.vol
volume remote1
  type protocol/client
  option transport-type tcp
  option remote-host server1.example.com
  option remote-subvolume brick
end-volume

volume remote2
  type protocol/client
  option transport-type tcp
  option remote-host server2.example.com
  option remote-subvolume brick
end-volume

volume replicate
  type cluster/replicate
  subvolumes remote1 remote2
end-volume

volume writebehind
  type performance/write-behind
  option window-size 1MB
  subvolumes replicate
end-volume

volume cache
  type performance/io-cache
  option cache-size 512MB
  subvolumes writebehind
end-volume

Make sure you use the correct server hostnames or IP addresses in the option remote-host lines!
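
It is also worth checking that the client can actually resolve and reach both storage servers before mounting (a quick sanity check; adjust the hostnames to your setup):

getent hosts server1.example.com server2.example.com
ping -c 1 server1.example.com
ping -c 1 server2.example.com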

That's it! Now we can mount the GlusterFS filesystem to /mnt/glusterfs with one of the following two commands:

glusterfs -f /etc/glusterfs/glusterfs.vol /mnt/glusterfs

or

mount -t glusterfs /etc/glusterfs/glusterfs.vol /mnt/glusterfs

You should now see the new share in the outputs of...

mount
client1:/tmp/glusterfs-2.0.1# mount
/dev/sda1 on / type ext3 (rw,errors=remount-ro)
tmpfs on /lib/init/rw type tmpfs (rw,nosuid,mode=0755)
proc on /proc type proc (rw,noexec,nosuid,nodev)
sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
udev on /dev type tmpfs (rw,mode=0755)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=620)
fusectl on /sys/fs/fuse/connections type fusectl (rw)
/etc/glusterfs/glusterfs.vol on /mnt/glusterfs type fuse.glusterfs (rw,max_read=131072,allow_other,default_permissions)
client1:/tmp/glusterfs-2.0.1#

... and...

df -h
client1:/tmp/glusterfs-2.0.1# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda1              29G  935M   27G   4% /
tmpfs                 126M     0  126M   0% /lib/init/rw
udev                   10M   80K   10M   1% /dev
tmpfs                 126M     0  126M   0% /dev/shm
/etc/glusterfs/glusterfs.vol
                       19G  804M   17G   5% /mnt/glusterfs
client1:/tmp/glusterfs-2.0.1#

(server1.example.com and server2.example.com each have 19GB of space for the GlusterFS filesystem, but because the data is mirrored, the client doesn't see 38GB (2 x 19GB), but only 19GB.)
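
To confirm this, you can compare the client's df output with what each storage server reports for the filesystem that holds the exported directory (a quick check; df shows the figures for whatever partition /data/export lives on):

server1.example.com/server2.example.com:

df -h /data/export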

Instead of mounting the GlusterFS share manually on the client, you could modify /etc/fstab so that the share gets mounted automatically when the client boots.

Open /etc/fstab and append the following line:

vi /etc/fstab
[...]
/etc/glusterfs/glusterfs.vol  /mnt/glusterfs  glusterfs  defaults  0  0
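
If you'd rather check the new entry before rebooting, you can remount the share via fstab (a quick sketch; only do this while nothing is using the mount):

umount /mnt/glusterfs
mount /mnt/glusterfs
mount | grep glusterfs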

To test if your modified /etc/fstab is working, reboot the client:

reboot

After the reboot, you should find the share in the outputs of...

df -h

... and...

mount

 

4 Testing

Now let's create some test files on the GlusterFS share:

client1.example.com:

touch /mnt/glusterfs/test1
touch /mnt/glusterfs/test2

Now let's check the /data/export directory on server1.example.com and server2.example.com. The test1 and test2 files should be present on each node:

server1.example.com/server2.example.com:

ls -l /data/export
server1:~# ls -l /data/export
total 0
-rw-r--r-- 1 root root 0 2009-06-02 15:31 test1
-rw-r--r-- 1 root root 0 2009-06-02 15:32 test2
server1:~#

Now we shut down server1.example.com and add/delete some files on the GlusterFS share on client1.example.com.

server1.example.com:

shutdown -h now

client1.example.com:

touch /mnt/glusterfs/test3
touch /mnt/glusterfs/test4
rm -f /mnt/glusterfs/test2

The changes should be visible in the /data/export directory on server2.example.com:

server2.example.com:

ls -l /data/export
server2:/tmp/glusterfs-2.0.1# ls -l /data/export
total 0
-rw-r--r-- 1 root root 0 2009-06-02 15:31 test1
-rw-r--r-- 1 root root 0 2009-06-02 15:32 test3
-rw-r--r-- 1 root root 0 2009-06-02 15:33 test4
server2:/tmp/glusterfs-2.0.1#

Let's boot server1.example.com again and take a look at the /data/export directory:

server1.example.com:

ls -l /data/export
server1:~# ls -l /data/export
total 0
-rw-r--r-- 1 root root 0 2009-06-02 15:31 test1
-rw-r--r-- 1 root root 0 2009-06-02 15:32 test2
server1:~#

As you can see, server1.example.com hasn't noticed the changes that happened while it was down. This is easy to fix: all we need to do is invoke a read command on the GlusterFS share on client1.example.com, e.g.:

client1.example.com:

ls -l /mnt/glusterfs/
client1:~# ls -l /mnt/glusterfs/
total 0
-rw-r--r-- 1 root root 0 2009-06-02 15:31 test1
-rw-r--r-- 1 root root 0 2009-06-02 15:32 test3
-rw-r--r-- 1 root root 0 2009-06-02 15:33 test4
client1:~#
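
A plain ls only touches what it lists; if a lot of files changed while a server was down, a common way to trigger self-heal across the whole share is to stat every file on the mount (a sketch, run on the client; this can take a while on large volumes):

find /mnt/glusterfs -noleaf -print0 | xargs --null stat > /dev/null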

Now take a look at the /data/export directory on server1.example.com again, and you should see that the changes have been replicated to that node:

server1.example.com:

ls -l /data/export
server1:~# ls -l /data/export
total 0
-rw-r--r-- 1 root root 0 2009-06-02 15:31 test1
-rw-r--r-- 1 root root 0 2009-06-02 15:52 test3
-rw-r--r-- 1 root root 0 2009-06-02 15:52 test4
server1:~#

 
