High-Availability Storage With GlusterFS On Ubuntu 9.10 - Automatic File Replication (Mirror) Across Two Storage Servers - Page 2


3 Setting Up The GlusterFS Client

client1.example.com:

On the client, we can install the GlusterFS client as follows:

aptitude install glusterfs-client glusterfs-server

Then we create the following directory:

mkdir /mnt/glusterfs

Next we create the file /etc/glusterfs/glusterfs.vol (we make a backup of the original /etc/glusterfs/glusterfs.vol file first):

cp /etc/glusterfs/glusterfs.vol /etc/glusterfs/glusterfs.vol_orig
cat /dev/null > /etc/glusterfs/glusterfs.vol
vi /etc/glusterfs/glusterfs.vol

volume remote1
  type protocol/client
  option transport-type tcp
  option remote-host server1.example.com
  option remote-subvolume brick
end-volume

volume remote2
  type protocol/client
  option transport-type tcp
  option remote-host server2.example.com
  option remote-subvolume brick
end-volume

volume replicate
  type cluster/replicate
  subvolumes remote1 remote2
end-volume

volume writebehind
  type performance/write-behind
  option window-size 1MB
  subvolumes replicate
end-volume

volume cache
  type performance/io-cache
  option cache-size 512MB
  subvolumes writebehind
end-volume
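
Note: On newer GlusterFS releases (3.x and later), the write-behind option window-size is reported as deprecated in favour of cache-size. If you run such a version, the writebehind volume could be written like this instead (this variant is not part of the original Ubuntu 9.10 setup and is untested here):

volume writebehind
  type performance/write-behind
  option cache-size 1MB
  subvolumes replicate
end-volume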

Make sure you use the correct server hostnames or IP addresses in the option remote-host lines!
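
For example, if you want to address the servers by IP instead of by hostname, the remote1 and remote2 volumes could use lines like these (192.168.0.100 and 192.168.0.101 are placeholder addresses; replace them with your servers' real IPs):

option remote-host 192.168.0.100
option remote-host 192.168.0.101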

That's it! Now we can mount the GlusterFS filesystem to /mnt/glusterfs with one of the following two commands:

glusterfs -f /etc/glusterfs/glusterfs.vol /mnt/glusterfs

or

mount -t glusterfs /etc/glusterfs/glusterfs.vol /mnt/glusterfs

You should now see the new share in the outputs of...

mount

root@client1:~# mount
/dev/mapper/client1-root on / type ext4 (rw,errors=remount-ro)
proc on /proc type proc (rw)
none on /sys type sysfs (rw,noexec,nosuid,nodev)
none on /sys/fs/fuse/connections type fusectl (rw)
none on /sys/kernel/debug type debugfs (rw)
none on /sys/kernel/security type securityfs (rw)
udev on /dev type tmpfs (rw,mode=0755)
none on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620)
none on /dev/shm type tmpfs (rw,nosuid,nodev)
none on /var/run type tmpfs (rw,nosuid,mode=0755)
none on /var/lock type tmpfs (rw,noexec,nosuid,nodev)
none on /lib/init/rw type tmpfs (rw,nosuid,mode=0755)
/dev/sda5 on /boot type ext2 (rw)
/etc/glusterfs/glusterfs.vol on /mnt/glusterfs type fuse.glusterfs (rw,max_read=131072,allow_other,default_permissions)
root@client1:~#

... and...

df -h

root@client1:~# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/client1-root
                       29G  808M   27G   3% /
udev                  122M  152K  121M   1% /dev
none                  122M     0  122M   0% /dev/shm
none                  122M   36K  122M   1% /var/run
none                  122M     0  122M   0% /var/lock
none                  122M     0  122M   0% /lib/init/rw
/dev/sda5             228M   15M  202M   7% /boot
/etc/glusterfs/glusterfs.vol
                       18G  805M   16G   5% /mnt/glusterfs
root@client1:~#

(server1.example.com and server2.example.com each have 18GB of space for the GlusterFS filesystem, but because the data is mirrored, the client doesn't see 36GB (2 x 18GB), but only 18GB.)

Instead of mounting the GlusterFS share manually on the client, you could modify /etc/fstab so that the share gets mounted automatically when the client boots.

Open /etc/fstab and append the following line:

vi /etc/fstab

[...]
/etc/glusterfs/glusterfs.vol  /mnt/glusterfs  glusterfs  defaults  0  0
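
If the GlusterFS share does not get mounted automatically because the network is not yet up when the fstab entries are processed, you can try marking it as a network filesystem with the _netdev mount option (this is an optional tweak and not part of the original setup; whether it is needed depends on your boot scripts):

[...]
/etc/glusterfs/glusterfs.vol  /mnt/glusterfs  glusterfs  defaults,_netdev  0  0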

To test if your modified /etc/fstab is working, reboot the client:

reboot

After the reboot, you should find the share in the outputs of...

df -h

... and...

mount
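
If you don't want to reboot, you can also test the new fstab entry by unmounting the share and mounting it again with nothing but the mount point as argument, which makes mount look the entry up in /etc/fstab:

umount /mnt/glusterfs
mount /mnt/glusterfs

The share should then reappear in the outputs of df -h and mount.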

 

4 Testing

Now let's create some test files on the GlusterFS share:

client1.example.com:

touch /mnt/glusterfs/test1
touch /mnt/glusterfs/test2

Now let's check the /data/export directory on server1.example.com and server2.example.com. The test1 and test2 files should be present on each node:

server1.example.com/server2.example.com:

ls -l /data/export

root@server1:~# ls -l /data/export
total 0
-rw-r--r-- 1 root root 0 2009-12-18 15:37 test1
-rw-r--r-- 1 root root 0 2009-12-18 15:37 test2
root@server1:~#

Now we shut down server1.example.com and add/delete some files on the GlusterFS share on client1.example.com.

server1.example.com:

shutdown -h now

client1.example.com:

touch /mnt/glusterfs/test3
touch /mnt/glusterfs/test4
rm -f /mnt/glusterfs/test2

The changes should be visible in the /data/export directory on server2.example.com:

server2.example.com:

ls -l /data/export

root@server2:~# ls -l /data/export
total 0
-rw-r--r-- 1 root root 0 2009-12-18 15:37 test1
-rw-r--r-- 1 root root 0 2009-12-18 15:39 test3
-rw-r--r-- 1 root root 0 2009-12-18 15:39 test4
root@server2:~#

Let's boot server1.example.com again and take a look at the /data/export directory:

server1.example.com:

ls -l /data/export

root@server1:~# ls -l /data/export
total 0
-rw-r--r-- 1 root root 0 2009-12-18 15:37 test1
-rw-r--r-- 1 root root 0 2009-12-18 15:37 test2
root@server1:~#

As you can see, server1.example.com hasn't noticed the changes that happened while it was down. This is easy to fix: all we need to do is invoke a read command on the GlusterFS share on client1.example.com, e.g.:

client1.example.com:

ls -l /mnt/glusterfs/

root@client1:~# ls -l /mnt/glusterfs/
total 0
-rw-r--r-- 1 root root 0 2009-12-18 15:37 test1
-rw-r--r-- 1 root root 0 2009-12-18 15:39 test3
-rw-r--r-- 1 root root 0 2009-12-18 15:39 test4
root@client1:~#
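
If a node was down for a longer time, you can read through the whole share once so that every file gets looked at and repaired by the replicate translator; a simple recursive listing is enough for that (any command that reads all the files would do):

ls -lR /mnt/glusterfs/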

Now take a look at the /data/export directory on server1.example.com again, and you should see that the changes have been replicated to that node:

server1.example.com:

ls -l /data/export

root@server1:~# ls -l /data/export
total 0
-rw-r--r-- 1 root root 0 2009-12-18 15:37 test1
-rw-r--r-- 1 root root 0 2009-12-18 15:39 test3
-rw-r--r-- 1 root root 0 2009-12-18 15:39 test4
root@server1:~#

 
