Striping Across Four Storage Nodes With GlusterFS On Debian Lenny - Page 2

3 Setting Up The GlusterFS Client

client1.example.com:

On the client, we need fuse and GlusterFS. Instead of installing the libfuse2 package from the Debian repository, we build a patched fuse version that has better support for GlusterFS.

First we install the prerequisites again:

aptitude install sshfs build-essential flex bison byacc libdb4.6 libdb4.6-dev

Then we build fuse as follows (you can find the latest patched fuse version on ftp://ftp.zresearch.com/pub/gluster/glusterfs/fuse/):

cd /tmp
wget ftp://ftp.zresearch.com/pub/gluster/glusterfs/fuse/fuse-2.7.4glfs11.tar.gz
tar -zxvf fuse-2.7.4glfs11.tar.gz
cd fuse-2.7.4glfs11
./configure
make && make install
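
To make sure the patched fuse installed correctly, you can load the kernel module and check that the fuse device exists (this normally happens automatically when the share is mounted later, so this is just a quick sanity check):

modprobe fuse
ls -l /dev/fuse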

Afterwards we build GlusterFS (just like on the server)...

cd /tmp
wget http://ftp.gluster.com/pub/gluster/glusterfs/2.0/LATEST/glusterfs-2.0.1.tar.gz
tar xvfz glusterfs-2.0.1.tar.gz
cd glusterfs-2.0.1
./configure --prefix=/usr > /dev/null
make && make install
ldconfig
glusterfs --version

... and create the following two directories:

mkdir /mnt/glusterfs
mkdir /etc/glusterfs

Next we create the file /etc/glusterfs/glusterfs.vol:

vi /etc/glusterfs/glusterfs.vol
volume remote1
  type protocol/client
  option transport-type tcp/client
  option remote-host server1.example.com
  option remote-subvolume brick
end-volume

volume remote2
  type protocol/client
  option transport-type tcp/client
  option remote-host server2.example.com
  option remote-subvolume brick
end-volume

volume remote3
  type protocol/client
  option transport-type tcp/client
  option remote-host server3.example.com
  option remote-subvolume brick
end-volume

volume remote4
  type protocol/client
  option transport-type tcp/client
  option remote-host server4.example.com
  option remote-subvolume brick
end-volume

volume stripe
  type cluster/stripe
  option block-size 1MB
  subvolumes remote1 remote2 remote3 remote4
end-volume

volume writebehind
  type performance/write-behind
  option window-size 1MB
  subvolumes stripe
end-volume

volume cache
  type performance/io-cache
  option cache-size 512MB
  subvolumes writebehind
end-volume

Make sure you use the correct server hostnames or IP addresses in the option remote-host lines!
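
For reference, the translators in this file are stacked bottom-up: the four protocol/client volumes are combined by cluster/stripe, which is then wrapped by the write-behind and io-cache performance translators. By default the client uses the last volume defined in the file (cache here) as the top of the stack. If you ever need to rule out the performance translators while debugging, it should be possible to mount a lower volume from the same file directly with the --volume-name option (available in GlusterFS 2.0; check glusterfs --help if in doubt):

glusterfs -f /etc/glusterfs/glusterfs.vol --volume-name=stripe /mnt/glusterfs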

That's it! Now we can mount the GlusterFS filesystem to /mnt/glusterfs with one of the following two commands:

glusterfs -f /etc/glusterfs/glusterfs.vol /mnt/glusterfs

or

mount -t glusterfs /etc/glusterfs/glusterfs.vol /mnt/glusterfs
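
If the mount command returns but the share does not appear, the client log is the first place to look. Depending on the configure prefix, the log directory may be /var/log/glusterfs/ or /usr/var/log/glusterfs/ (this is a guess based on the --prefix=/usr build above; the exact file name can differ on your system):

tail /var/log/glusterfs/*.log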

You should now see the new share in the outputs of...

mount
client1:~# mount
/dev/sda1 on / type ext3 (rw,errors=remount-ro)
tmpfs on /lib/init/rw type tmpfs (rw,nosuid,mode=0755)
proc on /proc type proc (rw,noexec,nosuid,nodev)
sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
udev on /dev type tmpfs (rw,mode=0755)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=620)
fusectl on /sys/fs/fuse/connections type fusectl (rw)
/etc/glusterfs/glusterfs.vol on /mnt/glusterfs type fuse.glusterfs (rw,max_read=131072,allow_other,default_permissions)
client1:~#

... and...

df -h
client1:~# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda1              29G  896M   27G   4% /
tmpfs                 126M     0  126M   0% /lib/init/rw
udev                   10M   80K   10M   1% /dev
tmpfs                 126M     0  126M   0% /dev/shm
/etc/glusterfs/glusterfs.vol
                      105G  3.4G   96G   4% /mnt/glusterfs
client1:~#

(server1.example.com, server2.example.com, server3.example.com, and server4.example.com each have about 26GB of space for the GlusterFS filesystem, so that the resulting share has a size of about 4 x 26GB (105GB).)

Instead of mounting the GlusterFS share manually on the client, you could modify /etc/fstab so that the share gets mounted automatically when the client boots.

Open /etc/fstab and append the following line:

vi /etc/fstab
[...]
/etc/glusterfs/glusterfs.vol  /mnt/glusterfs  glusterfs  defaults  0  0
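
If you want to verify the new entry before going through a full reboot, you can unmount the share and let mount re-read /etc/fstab (this assumes make install placed the mount.glusterfs helper into /sbin, which the GlusterFS 2.0.1 build should do):

umount /mnt/glusterfs
mount -a
df -h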

To test if your modified /etc/fstab is working, reboot the client:

reboot

After the reboot, you should find the share in the outputs of...

df -h

... and...

mount

 

4 Testing

Now let's create a big test file on the GlusterFS share:

client1.example.com:

dd if=/dev/zero of=/mnt/glusterfs/test.img bs=1024k count=1000
ls -l /mnt/glusterfs
client1:~# ls -l /mnt/glusterfs
total 1028032
-rw-r--r-- 1 root root 1048576000 2009-06-03 20:51 test.img
client1:~#
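
The size matches exactly what we asked dd for: 1000 blocks of 1024 KiB are 1000 x 1,048,576 = 1,048,576,000 bytes.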

Now let's check the /data/export directory on server1.example.com, server2.example.com, server3.example.com, and server4.example.com. You should see the test.img file on each node; due to striping, the reported sizes differ slightly from node to node (this is explained after the listings below):

server1.example.com:

ls -l /data/export
server1:~# ls -l /data/export
total 257008
-rw-r--r-- 1 root root 1045430272 2009-06-03 20:51 test.img
server1:~#

server2.example.com:

ls -l /data/export
server2:~# ls -l /data/export
total 257008
-rw-r--r-- 1 root root 1046478848 2009-06-03 20:55 test.img
server2:~#

server3.example.com:

ls -l /data/export
server3:~# ls -l /data/export
total 257008
-rw-r--r-- 1 root root 1047527424 2009-06-03 20:54 test.img
server3:~#

server4.example.com:

ls -l /data/export
server4:~# ls -l /data/export
total 257008
-rw-r--r-- 1 root root 1048576000 2009-06-03 20:02 test.img
server4:~#
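
The slightly different sizes are no cause for concern: the stripe translator creates sparse files on the backends, so ls -l reports the position of the last 1MB stripe block a node holds, not the amount of data it actually stores. The total lines above (257008 1K blocks, i.e. about 250MB per node) show that each server holds roughly a quarter of the 1000MB test file. You can double-check this with du, which counts allocated blocks instead of the apparent size:

du -sh /data/export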

 
