Distributed Storage Across Four Storage Nodes With GlusterFS On Debian Lenny - Page 2

Submitted by falko on Tue, 2009-06-23 16:29.

3 Setting Up The GlusterFS Client

client1.example.com:

On the client, we need to install fuse and GlusterFS. Instead of installing the libfuse2 package from the Debian repositories, we build a patched fuse version that has better support for GlusterFS.

First we install the prerequisites again:

aptitude install sshfs build-essential flex bison byacc libdb4.6 libdb4.6-dev

Then we build fuse as follows (you can find the latest patched fuse version on ftp://ftp.zresearch.com/pub/gluster/glusterfs/fuse/):

cd /tmp
wget ftp://ftp.zresearch.com/pub/gluster/glusterfs/fuse/fuse-2.7.4glfs11.tar.gz
tar -zxvf fuse-2.7.4glfs11.tar.gz
cd fuse-2.7.4glfs11
./configure
make && make install
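
If you want to make sure the fuse kernel module is available before continuing, you can load it and check for the device node (a quick, optional sanity check; module and device names are the standard ones on Lenny):

modprobe fuse
lsmod | grep fuse
ls -l /dev/fuse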

Afterwards we build GlusterFS (just like on the server)...

cd /tmp
wget http://ftp.gluster.com/pub/gluster/glusterfs/2.0/LATEST/glusterfs-2.0.1.tar.gz
tar xvfz glusterfs-2.0.1.tar.gz
cd glusterfs-2.0.1
./configure --prefix=/usr > /dev/null

make && make install
ldconfig
glusterfs --version

... and create the following two directories:

mkdir /mnt/glusterfs
mkdir /etc/glusterfs

Next we create the file /etc/glusterfs/glusterfs.vol:

vi /etc/glusterfs/glusterfs.vol

volume remote1
  type protocol/client
  option transport-type tcp
  option remote-host server1.example.com
  option remote-subvolume brick
end-volume

volume remote2
  type protocol/client
  option transport-type tcp
  option remote-host server2.example.com
  option remote-subvolume brick
end-volume

volume remote3
  type protocol/client
  option transport-type tcp
  option remote-host server3.example.com
  option remote-subvolume brick
end-volume

volume remote4
  type protocol/client
  option transport-type tcp
  option remote-host server4.example.com
  option remote-subvolume brick
end-volume

volume distribute
  type cluster/distribute
# spreads files across the four bricks; each file lands on exactly one brick, there is no replication
  subvolumes remote1 remote2 remote3 remote4
end-volume

volume writebehind
  type performance/write-behind
# aggregates small writes on the client before sending them to the bricks
  option window-size 1MB
  subvolumes distribute
end-volume

volume cache
  type performance/io-cache
# caches read data on the client
  option cache-size 512MB
  subvolumes writebehind
end-volume

Make sure you use the correct server hostnames or IP addresses in the option remote-host lines!
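If the hostnames are not resolvable via DNS, you can map them in /etc/hosts on the client instead. A minimal sketch follows; the IP addresses are only placeholders for this example, so replace them with the addresses of your own storage nodes:

vi /etc/hosts

[...]
192.168.0.100   server1.example.com
192.168.0.101   server2.example.com
192.168.0.102   server3.example.com
192.168.0.103   server4.example.com
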

That's it! Now we can mount the GlusterFS filesystem to /mnt/glusterfs with one of the following two commands:

glusterfs -f /etc/glusterfs/glusterfs.vol /mnt/glusterfs

or

mount -t glusterfs /etc/glusterfs/glusterfs.vol /mnt/glusterfs
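
If the mount does not succeed, it can help to start the client with an explicit log file and a more verbose log level. The options below are from the glusterfs 2.0 command line; check glusterfs --help on your build to confirm them:

glusterfs -f /etc/glusterfs/glusterfs.vol -l /var/log/glusterfs/client.log -L DEBUG /mnt/glusterfs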

You should now see the new share in the outputs of...

mount

client1:/tmp/glusterfs-2.0.1# mount
/dev/sda1 on / type ext3 (rw,errors=remount-ro)
tmpfs on /lib/init/rw type tmpfs (rw,nosuid,mode=0755)
proc on /proc type proc (rw,noexec,nosuid,nodev)
sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
udev on /dev type tmpfs (rw,mode=0755)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=620)
fusectl on /sys/fs/fuse/connections type fusectl (rw)
/etc/glusterfs/glusterfs.vol on /mnt/glusterfs type fuse.glusterfs (rw,max_read=131072,allow_other,default_permissions)
client1:/tmp/glusterfs-2.0.1#

... and...

df -h

client1:/tmp/glusterfs-2.0.1# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda1              29G  935M   27G   4% /
tmpfs                 126M     0  126M   0% /lib/init/rw
udev                   10M   80K   10M   1% /dev
tmpfs                 126M     0  126M   0% /dev/shm
/etc/glusterfs/glusterfs.vol
                      105G  3.4G   96G   4% /mnt/glusterfs
client1:/tmp/glusterfs-2.0.1#

(server1.example.com, server2.example.com, server3.example.com, and server4.example.com each contribute about 26GB of space to the GlusterFS filesystem, so the resulting share has a total size of about 4 x 26GB, i.e. roughly 105GB.)

Instead of mounting the GlusterFS share manually on the client, you could modify /etc/fstab so that the share gets mounted automatically when the client boots.

Open /etc/fstab and append the following line:

vi /etc/fstab

[...]
/etc/glusterfs/glusterfs.vol  /mnt/glusterfs  glusterfs  defaults  0  0
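
If you would like to check the new entry before rebooting, you can unmount the share and let mount read it back from /etc/fstab:

umount /mnt/glusterfs
mount -a
mount | grep glusterfs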

To test if your modified /etc/fstab is working, reboot the client:

reboot

After the reboot, you should find the share in the outputs of...

df -h

... and...

mount

 

4 Testing

Now let's create some test files on the GlusterFS share:

client1.example.com:

touch /mnt/glusterfs/test1
touch /mnt/glusterfs/test2
touch /mnt/glusterfs/test3
touch /mnt/glusterfs/test4
touch /mnt/glusterfs/test5
touch /mnt/glusterfs/test6
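
If you want a larger sample to make the distribution pattern more obvious, you can optionally create a batch of files in a loop as well (purely illustrative; the listings below only show the six test files created above):

for i in $(seq 1 100); do touch /mnt/glusterfs/file$i; done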

Now let's check the /data/export directory on server1.example.com, server2.example.com, server3.example.com, and server4.example.com. Because cluster/distribute places each file on exactly one brick (chosen by hashing the file name) and does not replicate it, each storage node holds only a part of the files/directories that make up the GlusterFS share on the client:

server1.example.com:

ls -l /data/export

server1:/tmp/glusterfs-2.0.1# ls -l /data/export
total 0
-rw-r--r-- 1 root root 0 2009-06-02 18:04 test1
-rw-r--r-- 1 root root 0 2009-06-02 18:05 test2
-rw-r--r-- 1 root root 0 2009-06-02 18:06 test5
server1:/tmp/glusterfs-2.0.1#

server2.example.com:

ls -l /data/export

server2:/tmp/glusterfs-2.0.1# ls -l /data/export
total 0
-rw-r--r-- 1 root root 0 2009-06-02 18:06 test4
server2:/tmp/glusterfs-2.0.1#

server3.example.com:

ls -l /data/export

server3:/tmp/glusterfs-2.0.1# ls -l /data/export
total 0
-rw-r--r-- 1 root root 0 2009-06-02 18:07 test6
server3:/tmp/glusterfs-2.0.1#

server4.example.com:

ls -l /data/export

server4:/tmp/glusterfs-2.0.1# ls -l /data/export
total 0
-rw-r--r-- 1 root root 0 2009-06-02 18:06 test3
server4:/tmp/glusterfs-2.0.1#
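
If you prefer to check all four bricks from the client in one step, a small loop over SSH works as well (this assumes you can reach the storage nodes as root via SSH; adjust the user and hostnames to your setup):

for i in 1 2 3 4; do
  echo "--- server$i ---"
  ssh root@server$i.example.com "ls -l /data/export"
done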

 

5 Links


Submitted by marco (not registered) on Thu, 2009-06-25 14:15.

Very interesting example and, above all, easy to understand.

I do have a question, though: what happens if one of the servers goes down? Will I simply see less available space on the client side, or will I lose data?

And how can a high-availability setup be configured? (For example: if server1 goes down, I would like to be able to keep working without any problem.)

Thanks