
High-Availability Storage With GlusterFS On Ubuntu 10.04 - Automatic File Replication (Mirror) Across Two Storage Servers - Page 2

3 Setting Up The GlusterFS Client

client1.example.com:

On the client, we can install the GlusterFS client as follows:

aptitude install glusterfs-client
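(If you want to check which GlusterFS version the Ubuntu 10.04 packages have installed, you can run:)

glusterfs --version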

Then we create the following directory:

mkdir /mnt/glusterfs

Next we create the file /etc/glusterfs/glusterfs.vol (we make a backup of the original /etc/glusterfs/glusterfs.vol file first):

cp /etc/glusterfs/glusterfs.vol /etc/glusterfs/glusterfs.vol_orig
cat /dev/null > /etc/glusterfs/glusterfs.vol
vi /etc/glusterfs/glusterfs.vol
volume remote1
  type protocol/client
  option transport-type tcp
  option remote-host server1.example.com
  option remote-subvolume brick
end-volume

volume remote2
  type protocol/client
  option transport-type tcp
  option remote-host server2.example.com
  option remote-subvolume brick
end-volume

volume replicate
  type cluster/replicate
  subvolumes remote1 remote2
end-volume

volume writebehind
  type performance/write-behind
  option window-size 1MB
  subvolumes replicate
end-volume

volume cache
  type performance/io-cache
  option cache-size 512MB
  subvolumes writebehind
end-volume

Make sure you use the correct server hostnames or IP addresses in the option remote-host lines!
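If server1.example.com and server2.example.com cannot be resolved via DNS, you can add them to /etc/hosts on the client instead (the IP addresses below are just examples - use the addresses of your own servers):

vi /etc/hosts
[...]
192.168.0.100   server1.example.com
192.168.0.101   server2.example.com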

That's it! Now we can mount the GlusterFS filesystem to /mnt/glusterfs with one of the following two commands:

glusterfs -f /etc/glusterfs/glusterfs.vol /mnt/glusterfs

or

mount -t glusterfs /etc/glusterfs/glusterfs.vol /mnt/glusterfs
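If the share does not show up, the GlusterFS client log under /var/log/glusterfs/ usually tells you why (the exact log file name depends on the GlusterFS version and the mount point), e.g.:

ls -l /var/log/glusterfs/
tail -n 20 /var/log/glusterfs/*.log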

You should now see the new share in the outputs of...

mount
root@client1:~# mount
/dev/mapper/server3-root on / type ext4 (rw,errors=remount-ro)
proc on /proc type proc (rw,noexec,nosuid,nodev)
none on /sys type sysfs (rw,noexec,nosuid,nodev)
none on /sys/fs/fuse/connections type fusectl (rw)
none on /sys/kernel/debug type debugfs (rw)
none on /sys/kernel/security type securityfs (rw)
none on /dev type devtmpfs (rw,mode=0755)
none on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620)
none on /dev/shm type tmpfs (rw,nosuid,nodev)
none on /var/run type tmpfs (rw,nosuid,mode=0755)
none on /var/lock type tmpfs (rw,noexec,nosuid,nodev)
none on /lib/init/rw type tmpfs (rw,nosuid,mode=0755)
none on /var/lib/ureadahead/debugfs type debugfs (rw,relatime)
/dev/sda1 on /boot type ext2 (rw)
/etc/glusterfs/glusterfs.vol on /mnt/glusterfs type fuse.glusterfs (rw,allow_other,default_permissions,max_read=131072)
root@client1:~#

... and...

df -h
root@client1:~# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/server3-root
                       29G  852M   26G   4% /
none                  243M  172K  242M   1% /dev
none                  247M     0  247M   0% /dev/shm
none                  247M   36K  247M   1% /var/run
none                  247M     0  247M   0% /var/lock
none                  247M     0  247M   0% /lib/init/rw
none                   29G  852M   26G   4% /var/lib/ureadahead/debugfs
/dev/sda1             228M   17M  199M   8% /boot
/etc/glusterfs/glusterfs.vol
                       18G  848M   16G   5% /mnt/glusterfs
root@client1:~#

(server1.example.com and server2.example.com each have 18GB of space for the GlusterFS filesystem, but because the data is mirrored, the client doesn't see 36GB (2 x 18GB), but only 18GB.)

Instead of mounting the GlusterFS share manually on the client, you could modify /etc/fstab so that the share gets mounted automatically when the client boots.

Open /etc/fstab and append the following line:

vi /etc/fstab
[...]
/etc/glusterfs/glusterfs.vol  /mnt/glusterfs  glusterfs  defaults  0  0
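(Because GlusterFS is a network filesystem, some setups add the _netdev mount option so that the mount is only attempted after networking is up; whether this has an effect depends on your distribution's boot scripts:)

/etc/glusterfs/glusterfs.vol  /mnt/glusterfs  glusterfs  defaults,_netdev  0  0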

To test if your modified /etc/fstab is working, reboot the client:

reboot
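(If you do not want to reboot just for this test, you should also be able to unmount the share and mount it again via the new fstab entry:)

umount /mnt/glusterfs
mount /mnt/glusterfs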

After the reboot, you should find the share in the outputs of...

df -h

... and...

mount

 

4 Testing

Now let's create some test files on the GlusterFS share:

client1.example.com:

touch /mnt/glusterfs/test1
touch /mnt/glusterfs/test2
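If you also want to verify that file contents (and not just file names) are replicated, you can write some data to a temporary file and compare checksums; the file name replication-check.txt is just an example, and we delete the file again afterwards so that the directory listings below match:

echo "GlusterFS replication test" > /mnt/glusterfs/replication-check.txt
md5sum /mnt/glusterfs/replication-check.txt

Running md5sum /data/export/replication-check.txt on server1.example.com and server2.example.com should print the same checksum. Afterwards, remove the file again:

rm -f /mnt/glusterfs/replication-check.txt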

Now let's check the /data/export directory on server1.example.com and server2.example.com. The test1 and test2 files should be present on each node:

server1.example.com/server2.example.com:

ls -l /data/export
root@server1:~# ls -l /data/export
total 0
-rw-r--r-- 1 root root 0 2010-09-27 16:18 test1
-rw-r--r-- 1 root root 0 2010-09-27 16:18 test2
root@server1:~#

Now we shut down server1.example.com and add/delete some files on the GlusterFS share on client1.example.com.

server1.example.com:

shutdown -h now
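(Instead of powering off the whole machine, it should also be enough to just stop the GlusterFS daemon for this test; on Ubuntu 10.04 the init script installed by the glusterfs-server package is typically /etc/init.d/glusterfs-server:)

/etc/init.d/glusterfs-server stop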

client1.example.com:

touch /mnt/glusterfs/test3
touch /mnt/glusterfs/test4
rm -f /mnt/glusterfs/test2

The changes should be visible in the /data/export directory on server2.example.com:

server2.example.com:

ls -l /data/export
root@server2:~# ls -l /data/export
total 0
-rw-r--r-- 1 root root 0 2010-09-27 16:18 test1
-rw-r--r-- 1 root root 0 2010-09-27 16:19 test3
-rw-r--r-- 1 root root 0 2010-09-27 16:19 test4
root@server2:~#

Let's boot server1.example.com again and take a look at the /data/export directory:

server1.example.com:

ls -l /data/export
root@server1:~# ls -l /data/export
total 0
-rw-r--r-- 1 root root 0 2010-09-27 16:18 test1
-rw-r--r-- 1 root root 0 2010-09-27 16:18 test2
root@server1:~#

As you can see, server1.example.com hasn't noticed the changes that happened while it was down. This is easy to fix: all we need to do is invoke a read command on the GlusterFS share on client1.example.com, e.g.:

client1.example.com:

ls -l /mnt/glusterfs/
root@client1:~# ls -l /mnt/glusterfs/
total 8
-rw-r--r-- 1 root root 0 2010-09-27 16:18 test1
-rw-r--r-- 1 root root 0 2010-09-27 16:19 test3
-rw-r--r-- 1 root root 0 2010-09-27 16:19 test4
root@client1:~#

Now take a look at the /data/export directory on server1.example.com again, and you should see that the changes have been replicated to that node:

server1.example.com:

ls -l /data/export
root@server1:~# ls -l /data/export
total 8
-rw-r--r-- 1 root root 0 2010-09-27 16:18 test1
-rw-r--r-- 1 root root 0 2010-09-27 16:19 test3
-rw-r--r-- 1 root root 0 2010-09-27 16:19 test4
root@server1:~#
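Note that the ls above only reads the top-level directory of the share. If the share contains nested directories, a commonly used way to trigger the self-heal for every file with the GlusterFS 3.x releases of this era is a recursive stat of the whole mount on the client, for example:

find /mnt/glusterfs -noleaf -print0 | xargs --null stat >/dev/null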

 
