Striping Across Four Storage Nodes With GlusterFS 3.2.x On Ubuntu 12.10
3 Setting Up The GlusterFS Client
client1.example.com:
On the client, we can install the GlusterFS client as follows:
apt-get install glusterfs-client
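The package installs the FUSE-based GlusterFS native client. If you want to verify which version was installed (ideally it should match the 3.2.x version running on the storage servers), you can check with:
glusterfs --version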
Then we create the following directory:
mkdir /mnt/glusterfs
That's it! Now we can mount the GlusterFS filesystem to /mnt/glusterfs with the following command:
mount.glusterfs server1.example.com:/testvol /mnt/glusterfs
(Instead of server1.example.com you can also use server2.example.com, server3.example.com, or server4.example.com in the above command!)
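If you prefer the generic mount syntax, the following should be equivalent, because mount dispatches to mount.glusterfs for the glusterfs filesystem type:
mount -t glusterfs server1.example.com:/testvol /mnt/glusterfs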
You should now see the new share in the outputs of...
mount
root@client1:~# mount
/dev/mapper/server5-root on / type ext4 (rw,errors=remount-ro)
proc on /proc type proc (rw,noexec,nosuid,nodev)
sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
fusectl on /sys/fs/fuse/connections type fusectl (rw)
none on /sys/kernel/debug type debugfs (rw)
none on /sys/kernel/security type securityfs (rw)
udev on /dev type devtmpfs (rw,mode=0755)
devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620)
tmpfs on /run type tmpfs (rw,noexec,nosuid,size=10%,mode=0755)
none on /run/lock type tmpfs (rw,noexec,nosuid,nodev,size=5242880)
none on /run/shm type tmpfs (rw,nosuid,nodev)
/dev/sda1 on /boot type ext2 (rw)
server1.example.com:/testvol on /mnt/glusterfs type fuse.glusterfs (rw,allow_other,default_permissions,max_read=131072)
root@client1:~#
... and...
df -h
root@client1:~# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/server5-root
29G 1.1G 27G 4% /
udev 238M 4.0K 238M 1% /dev
tmpfs 99M 212K 99M 1% /run
none 5.0M 0 5.0M 0% /run/lock
none 247M 0 247M 0% /run/shm
/dev/sda1 228M 24M 193M 11% /boot
server1.example.com:/testvol
116G 4.2G 106G 4% /mnt/glusterfs
root@client1:~#
Instead of mounting the GlusterFS share manually on the client, you could modify /etc/fstab so that the share gets mounted automatically when the client boots.
Open /etc/fstab and append the following line:
vi /etc/fstab
[...]
server1.example.com:/testvol /mnt/glusterfs glusterfs defaults,_netdev 0 0
(Again, instead of server1.example.com you can also use server2.example.com, server3.example.com, or server4.example.com!)
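If you want to check the new entry for typos before rebooting, you can unmount the share (assuming it is still mounted from the manual step above) and let mount process /etc/fstab again:
umount /mnt/glusterfs
mount -a
If the share shows up in the output of df -h afterwards, the fstab line is good.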
To test if your modified /etc/fstab is working, reboot the client:
reboot
After the reboot, you should find the share in the outputs of...
df -h
... and...
mount
4 Testing
Now let's create a big test file on the GlusterFS share:
client1.example.com:
dd if=/dev/zero of=/mnt/glusterfs/test.img bs=1024k count=1000
ls -l /mnt/glusterfs
root@client1:~# ls -l /mnt/glusterfs
total 1024032
-rw-r--r-- 1 root root 1048576000 2012-12-17 17:31 test.img
root@client1:~#
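As an optional sanity check, you can checksum the file through the mount; running the same command again later (for example after a reboot) should return an identical hash if all stripes are read back intact:
md5sum /mnt/glusterfs/test.img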
Now let's check the /data directory on server1.example.com, server2.example.com, server3.example.com, and server4.example.com. You should see the test.img file on each node, but with different sizes (due to data striping):
server1.example.com:
ls -l /data
root@server1:~# ls -l /data
total 256008
-rw-r--r-- 1 root root 1045430272 2012-12-17 17:31 test.img
root@server1:~#
server2.example.com:
ls -l /data
root@server2:~# ls -l /data
total 256008
-rw-r--r-- 1 root root 1046478848 2012-12-17 17:27 test.img
root@server2:~#
server3.example.com:
ls -l /data
root@server3:~# ls -l /data
total 256008
-rw-r--r-- 1 root root 1047527424 2012-12-17 17:26 test.img
root@server3:~#
server4.example.com:
ls -l /data
root@server4:~# ls -l /data
total 256008
-rw-r--r-- 1 root root 1048576000 2012-12-17 17:30 test.img
root@server4:~#
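Note that the apparent file size on each brick is close to the full 1000 MB because the stripe translator stores each brick's chunks in a sparse file, leaving holes where the other bricks' chunks would be. The total lines (256008 1K-blocks, i.e. about 250 MB) already show that each node really holds only about a quarter of the data; you can confirm this with du, which reports actual disk usage rather than apparent size:
du -h /data/test.img
This should report roughly 250 MB on each of the four nodes.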
5 Links
- GlusterFS: http://www.gluster.org/
- GlusterFS 3.2 Documentation: http://download.gluster.com/pub/gluster/glusterfs/3.2/Documentation/AG/html/index.html
- Ubuntu: http://www.ubuntu.com/