Distributed Storage Across Four Storage Nodes With GlusterFS On Debian Lenny
Author: Falko Timme
Last edited 06/02/2009
This tutorial shows how to combine four single storage servers (running Debian Lenny) into one large storage server (distributed storage) with GlusterFS. The client system (Debian Lenny as well) will be able to access the storage as if it were a local filesystem. GlusterFS is a clustered file system capable of scaling to several petabytes. It aggregates various storage bricks over InfiniBand RDMA or TCP/IP interconnects into one large parallel network file system. Storage bricks can be made of any commodity hardware, such as x86-64 servers with SATA-II RAID and InfiniBand HBA.
Please note that this kind of storage (distributed storage) doesn't provide any high-availability features, as would be the case with replicated storage.
I do not issue any guarantee that this will work for you!
1 Preliminary Note
In this tutorial I use five systems, four servers and a client:
- server1.example.com: IP address 192.168.0.100 (server)
- server2.example.com: IP address 192.168.0.101 (server)
- server3.example.com: IP address 192.168.0.102 (server)
- server4.example.com: IP address 192.168.0.103 (server)
- client1.example.com: IP address 192.168.0.104 (client)
All five systems should be able to resolve the other systems' hostnames. If this cannot be done through DNS, you should edit the /etc/hosts file so that it looks as follows on all five systems:
127.0.0.1       localhost.localdomain   localhost
192.168.0.100   server1.example.com     server1
192.168.0.101   server2.example.com     server2
192.168.0.102   server3.example.com     server3
192.168.0.103   server4.example.com     server4
192.168.0.104   client1.example.com     client1

# The following lines are desirable for IPv6 capable hosts
::1     localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts
(It is also possible to use IP addresses instead of hostnames in the following setup. If you prefer to use IP addresses, you don't have to care about whether the hostnames can be resolved or not.)
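Before continuing, it can be worth verifying that the names actually resolve from every machine. A minimal check, assuming the /etc/hosts entries above are in place (this check is illustrative and not part of the original setup):

getent hosts server1.example.com server2.example.com server3.example.com server4.example.com client1.example.com

Each hostname should resolve to the IP address listed above.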
2 Setting Up The GlusterFS Servers
GlusterFS isn't available as a Debian package for Debian Lenny, so we have to build it ourselves. First we install the prerequisites:
aptitude install sshfs build-essential flex bison byacc libdb4.6 libdb4.6-dev
Then we download the latest GlusterFS release from http://www.gluster.org/download.php and build it as follows:
tar xvfz glusterfs-2.0.1.tar.gz
cd glusterfs-2.0.1
./configure --prefix=/usr > /dev/null
server1:/tmp/glusterfs-2.0.1# ./configure --prefix=/usr > /dev/null
GlusterFS configure summary
FUSE client : no
Infiniband verbs : no
epoll IO multiplex : yes
Berkeley-DB : yes
libglusterfsclient : yes
mod_glusterfs : no ()
argp-standalone : no
make && make install
The command

glusterfs --version

should now show the GlusterFS version that you've just compiled (2.0.1 in this case):
server1:/tmp/glusterfs-2.0.1# glusterfs --version
glusterfs 2.0.1 built on May 29 2009 17:23:10
Repository revision: 5c1d9108c1529a1155963cb1911f8870a674ab5b
Copyright (c) 2006-2009 Z RESEARCH Inc. <http://www.zresearch.com>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU General Public License.
Next we create a few directories (the export directory and the directory that will hold the GlusterFS configuration):

mkdir /data/
mkdir /data/export
mkdir /etc/glusterfs
Now we create the GlusterFS server configuration file /etc/glusterfs/glusterfsd.vol which defines which directory will be exported (/data/export) and what client is allowed to connect (192.168.0.104 = client1.example.com):
volume posix
  type storage/posix
  option directory /data/export
end-volume

volume locks
  type features/locks
  subvolumes posix
end-volume

volume brick
  type performance/io-threads
  option thread-count 8
  subvolumes locks
end-volume

volume server
  type protocol/server
  option transport-type tcp
  option auth.addr.brick.allow 192.168.0.104
  subvolumes brick
end-volume
Please note that it is possible to use wildcards for the IP addresses (like 192.168.*) and that you can specify multiple IP addresses separated by commas (e.g. 192.168.0.104,192.168.0.105).
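For illustration, either of the following variants (example values only, not part of this setup) could be used in place of the auth.addr.brick.allow line above, to allow a whole subnet or two specific clients respectively:

option auth.addr.brick.allow 192.168.0.*
option auth.addr.brick.allow 192.168.0.104,192.168.0.105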
Afterwards we create the system startup links for the glusterfsd init script...
update-rc.d glusterfsd defaults
... and start glusterfsd:

/etc/init.d/glusterfsd start
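To verify that the daemon came up and is listening for client connections, a quick check like the following can be run on each server (this is just a sanity test and not part of the original setup; the port shown depends on your configuration):

netstat -tap | grep glusterfsd

If glusterfsd is running, this should show a listening TCP socket owned by the glusterfsd process.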