
Striping Across Four Storage Nodes With GlusterFS 3.2.x On Ubuntu 12.04

3 Setting Up The GlusterFS Client

On the client, we can install the GlusterFS client as follows:

apt-get install glusterfs-client
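
If you want to double-check that the client package is in place before continuing, you can print the version of the installed GlusterFS client (the exact version string depends on your Ubuntu repositories; a 3.2.x release is expected here):

glusterfs --version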

Then we create the following directory:

mkdir /mnt/glusterfs

That's it! Now we can mount the GlusterFS filesystem to /mnt/glusterfs with the following command:

mount.glusterfs server1.example.com:/testvol /mnt/glusterfs

(Instead of server1.example.com you can as well use server2.example.com, server3.example.com or server4.example.com in the above command!)
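
If the mount command fails, the GlusterFS client log under /var/log/glusterfs/ is the first place to look. The log file name usually mirrors the mount point (e.g. mnt-glusterfs.log), so adjust the path below if yours is named differently:

tail -n 20 /var/log/glusterfs/mnt-glusterfs.log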

You should now see the new share in the outputs of...

mount

root@client1:~# mount
/dev/mapper/server5-root on / type ext4 (rw,errors=remount-ro)
proc on /proc type proc (rw,noexec,nosuid,nodev)
sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
fusectl on /sys/fs/fuse/connections type fusectl (rw)
none on /sys/kernel/debug type debugfs (rw)
none on /sys/kernel/security type securityfs (rw)
udev on /dev type devtmpfs (rw,mode=0755)
devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620)
tmpfs on /run type tmpfs (rw,noexec,nosuid,size=10%,mode=0755)
none on /run/lock type tmpfs (rw,noexec,nosuid,nodev,size=5242880)
none on /run/shm type tmpfs (rw,nosuid,nodev)
/dev/sda1 on /boot type ext2 (rw)
server1.example.com:/testvol on /mnt/glusterfs type fuse.glusterfs (rw,allow_other,default_permissions,max_read=131072)

... and...

df -h

root@client1:~# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/server5-root
                       29G  1.1G   27G   4% /
udev                  238M  4.0K  238M   1% /dev
tmpfs                  99M  212K   99M   1% /run
none                  5.0M     0  5.0M   0% /run/lock
none                  247M     0  247M   0% /run/shm
/dev/sda1             228M   24M  193M  11% /boot
server1.example.com:/testvol
                      116G  4.2G  106G   4% /mnt/glusterfs

Instead of mounting the GlusterFS share manually on the client, you could modify /etc/fstab so that the share gets mounted automatically when the client boots.

Open /etc/fstab and append the following line:

vi /etc/fstab

[...]
server1.example.com:/testvol /mnt/glusterfs glusterfs defaults,_netdev 0 0

(Again, instead of server1.example.com you can as well use server2.example.com, server3.example.com or server4.example.com!)
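
If you want to check the new /etc/fstab entry without rebooting right away, you can unmount the share and let mount re-read the file (just a quick sanity check; make sure nothing is currently using /mnt/glusterfs):

umount /mnt/glusterfs
mount -a
mount | grep glusterfs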

To test if your modified /etc/fstab is working, reboot the client:

reboot

After the reboot, you should find the share in the outputs of...

df -h

... and...

mount

4 Testing

Now let's create a big test file on the GlusterFS share:

dd if=/dev/zero of=/mnt/glusterfs/test.img bs=1024k count=1000

ls -l /mnt/glusterfs

root@client1:~# ls -l /mnt/glusterfs
total 1024032
-rw-r--r-- 1 root root 1048576000 2012-05-29 17:31 test.img
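
If you want to be sure the file can be read back intact through the striped volume, you can checksum it on the client (this reads the whole 1 GB back over the network, so it takes a moment; md5sum is just one possible tool):

md5sum /mnt/glusterfs/test.img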

Now let's check the /data directory on server1.example.com, server2.example.com, server3.example.com, and server4.example.com. You should see the test.img file on each node, but with different sizes (due to data striping):

ls -l /data

root@server1:~# ls -l /data
total 256008
-rw-r--r-- 1 root root 1045430272 2012-05-29 17:31 test.img

ls -l /data

root@server2:~# ls -l /data
total 256008
-rw-r--r-- 1 root root 1046478848 2012-05-29 17:27 test.img

ls -l /data

root@server3:~# ls -l /data
total 256008
-rw-r--r-- 1 root root 1047527424 2012-05-29 17:26 test.img

ls -l /data

root@server4:~# ls -l /data
total 256008
-rw-r--r-- 1 root root 1048576000 2012-05-29 17:30 test.img
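
Note that the sizes reported by ls -l are the apparent file sizes; because the data is striped (not replicated), each node actually stores only about a quarter of the file. You can see this by checking the real disk usage of /data on each node (with the 1 GB test file, each brick should hold roughly 250 MB):

du -sh /data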

