High-Availability Storage With GlusterFS 3.2.x On Ubuntu 12.04 - Automatic File Replication Across Two Storage Servers - Page 2

3 Setting Up The GlusterFS Client

client1.example.com:

On the client, we can install the GlusterFS client as follows:

apt-get install glusterfs-client

Then we create the following directory:

mkdir /mnt/glusterfs

That's it! Now we can mount the GlusterFS filesystem to /mnt/glusterfs with the following command:

mount.glusterfs server1.example.com:/testvol /mnt/glusterfs

(Instead of server1.example.com, you can just as well use server2.example.com in the above command.)
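The server name in the mount command is only used to fetch the volume description (the volfile); once the share is mounted, the client talks to both bricks directly, which is why either server name works. If you want the mount itself to succeed even when server1 happens to be down, the GlusterFS mount helper accepts a backup volfile server option in many versions - check that your 3.2.x mount.glusterfs supports it before relying on it:

```shell
# Fall back to server2 for the volfile if server1 is unreachable at
# mount time. Replication does not depend on which server is named
# here; this only affects fetching the volume description.
mount -t glusterfs -o backupvolfile-server=server2.example.com \
    server1.example.com:/testvol /mnt/glusterfs
```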

You should now see the new share in the outputs of...

mount

root@client1:~# mount
/dev/mapper/server3-root on / type ext4 (rw,errors=remount-ro)
proc on /proc type proc (rw,noexec,nosuid,nodev)
sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
fusectl on /sys/fs/fuse/connections type fusectl (rw)
none on /sys/kernel/debug type debugfs (rw)
none on /sys/kernel/security type securityfs (rw)
udev on /dev type devtmpfs (rw,mode=0755)
devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620)
tmpfs on /run type tmpfs (rw,noexec,nosuid,size=10%,mode=0755)
none on /run/lock type tmpfs (rw,noexec,nosuid,nodev,size=5242880)
none on /run/shm type tmpfs (rw,nosuid,nodev)
/dev/sda1 on /boot type ext2 (rw)
server1.example.com:/testvol on /mnt/glusterfs type fuse.glusterfs (rw,allow_other,default_permissions,max_read=131072)
root@client1:~#

... and...

df -h

root@client1:~# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/server3-root
                       29G  1.1G   27G   4% /
udev                  238M  4.0K  238M   1% /dev
tmpfs                  99M  212K   99M   1% /run
none                  5.0M     0  5.0M   0% /run/lock
none                  247M     0  247M   0% /run/shm
/dev/sda1             228M   24M  193M  11% /boot
server1.example.com:/testvol
                       29G  1.1G   27G   4% /mnt/glusterfs
root@client1:~#

Instead of mounting the GlusterFS share manually on the client, you could modify /etc/fstab so that the share gets mounted automatically when the client boots.

Open /etc/fstab and append the following line:

vi /etc/fstab

[...]
server1.example.com:/testvol /mnt/glusterfs glusterfs defaults,_netdev 0 0

(Again, instead of server1.example.com, you can just as well use server2.example.com.)
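Rebooting is the thorough test, but you can also check the new /etc/fstab line in place: mount -a mounts every fstab entry that is not currently mounted, and mountpoint(1) then confirms the result. A quick sketch, run as root on the client:

```shell
# Unmount first so mount -a actually has work to do, then let it
# re-read /etc/fstab; mountpoint(1) confirms the share came back.
umount /mnt/glusterfs
mount -a
mountpoint -q /mnt/glusterfs && echo "GlusterFS share mounted from /etc/fstab"
```

This catches typos in the fstab line immediately; the reboot is still worth doing once to verify that _netdev delays the mount until the network is up.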

To test whether your modified /etc/fstab works, reboot the client:

reboot

After the reboot, you should find the share in the outputs of...

df -h

... and...

mount

 

4 Testing

Now let's create some test files on the GlusterFS share:

client1.example.com:

touch /mnt/glusterfs/test1
touch /mnt/glusterfs/test2

Now let's check the /data directory on server1.example.com and server2.example.com. The test1 and test2 files should be present on each node:

server1.example.com/server2.example.com:

ls -l /data

root@server1:~# ls -l /data
total 8
-rw-r--r-- 1 root root 0 2012-05-29 11:17 test1
-rw-r--r-- 1 root root 0 2012-05-29 11:17 test2
root@server1:~#

Now we shut down server1.example.com and add/delete some files on the GlusterFS share on client1.example.com.

server1.example.com:

shutdown -h now

client1.example.com:

touch /mnt/glusterfs/test3
touch /mnt/glusterfs/test4
rm -f /mnt/glusterfs/test2

The changes should be visible in the /data directory on server2.example.com:

server2.example.com:

ls -l /data

root@server2:~# ls -l /data
total 8
-rw-r--r-- 1 root root 0 2012-05-29 11:17 test1
-rw-r--r-- 1 root root 0 2012-05-29 11:38 test3
-rw-r--r-- 1 root root 0 2012-05-29 11:38 test4
root@server2:~#

Let's boot server1.example.com again and take a look at the /data directory:

server1.example.com:

ls -l /data

root@server1:~# ls -l /data
total 8
-rw-r--r-- 1 root root 0 2012-05-29 11:17 test1
-rw-r--r-- 1 root root 0 2012-05-29 11:17 test2
root@server1:~#

As you can see, server1.example.com hasn't picked up the changes that happened while it was down. This is easy to fix: all we need to do is invoke a read command on the GlusterFS share on client1.example.com, e.g.:

client1.example.com:

ls -l /mnt/glusterfs/

root@client1:~# ls -l /mnt/glusterfs/
total 8
-rw-r--r-- 1 root root 0 2012-05-29 11:17 test1
-rw-r--r-- 1 root root 0 2012-05-29 11:38 test3
-rw-r--r-- 1 root root 0 2012-05-29 11:38 test4
root@client1:~#
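In GlusterFS 3.2, a file is healed when it is accessed through the mount, so walking the whole volume and stat-ing every entry forces a full self-heal. A small sketch of that walk - heal_walk is a name made up here; on the client you would point it at /mnt/glusterfs:

```shell
# stat every entry below a directory; on a GlusterFS mount this read
# access is what triggers self-heal of files that changed while a
# brick was offline.
heal_walk() {
    find "$1" -print0 | xargs -0 stat >/dev/null && echo "heal walk of $1 done"
}

# Dry run against a scratch directory standing in for /mnt/glusterfs:
mkdir -p /tmp/glusterfs-demo
touch /tmp/glusterfs-demo/test1 /tmp/glusterfs-demo/test3 /tmp/glusterfs-demo/test4
heal_walk /tmp/glusterfs-demo
```

For a small volume, ls -lR /mnt/glusterfs should have the same effect; the find/stat walk just handles deep trees and odd filenames more robustly.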

Now take a look at the /data directory on server1.example.com again, and you should see that the changes have been replicated to that node:

server1.example.com:

ls -l /data

root@server1:~# ls -l /data
total 4
-rw-r--r-- 1 root root 0 2012-05-29 11:17 test1
-rw-r--r-- 1 root root 0 2012-05-29 11:38 test3
-rw-r--r-- 1 root root 0 2012-05-29 11:38 test4
root@server1:~#

 

5 Comments

From: David L. Willson at: 2012-12-12 17:21:26

I've used your tutorial twice now: once at work, to set up a 4-node demo, and again last night, to do a hands-on interactive demo of a 7-node Gluster cluster. (We meant to have 8, but something was wrong with the david node.)

Anyway, I just wanted to say thanks for the simple, useful article.

From: Anonymous at: 2012-06-14 09:34:52

Is it possible to have just two mirrored, replicating nodes? Then there would be no need to install the GlusterFS client.

From: Anonymous at: 2012-08-09 18:15:29

Same question. Is this possible?

From: at: 2012-08-22 11:38:58

It's possible, but I recently saw this on the Gluster forums: http://community.gluster.org/q/can-i-access-brick-directories-directly/ Too bad he didn't give any further explanation as to why it's not recommended.

From: psiek at: 2012-10-09 11:29:58

Many thanks for this tutorial, it worked like a charm. I tested with two mirrored clients on the same servers, which let me remove a CIFS mount and saved a lot of time. Thanks again. Kind regards, Philippe

From: Reza at: 2012-11-29 10:18:33

As per the question above: is it possible to do that? Also, I didn't quite understand the beginning of the setup. What setup is needed for Ubuntu 12.04?