Striping Across Four Storage Nodes With GlusterFS On Debian Lenny


    This tutorial shows how to do data striping (segmentation of logically sequential data, such as a single file, so that segments can be assigned to multiple physical devices in a round-robin fashion and thus written concurrently) across four single storage servers (running Debian Lenny) with GlusterFS. For example, with a 128KB stripe size, the first 128KB of a file go to the first server, the next 128KB to the second server, and so on, wrapping around to the first server after the fourth. The client system (Debian Lenny as well) will be able to access the storage as if it were a local filesystem. GlusterFS is a clustered file-system capable of scaling to several petabytes. It aggregates various storage bricks over Infiniband RDMA or TCP/IP interconnect into one large parallel network file system. Storage bricks can be made of any commodity hardware, such as x86-64 servers with SATA-II RAID and Infiniband HBA.

    Please note that this kind of storage doesn't provide any high-availability/fault tolerance features, as would be the case with replicated storage.

    I do not issue any guarantee that this will work for you!


    1 Preliminary Note

    In this tutorial I use five systems, four servers and a client:

    • server1: IP address 192.168.0.100 (server)
    • server2: IP address 192.168.0.101 (server)
    • server3: IP address 192.168.0.102 (server)
    • server4: IP address 192.168.0.103 (server)
    • client1: IP address 192.168.0.104 (client)

    All five systems should be able to resolve the other systems' hostnames. If this cannot be done through DNS, you should edit the /etc/hosts file so that it looks as follows on all five systems:

    vi /etc/hosts

    127.0.0.1       localhost.localdomain   localhost
    192.168.0.100   server1
    192.168.0.101   server2
    192.168.0.102   server3
    192.168.0.103   server4
    192.168.0.104   client1
    # The following lines are desirable for IPv6 capable hosts
    ::1     localhost ip6-localhost ip6-loopback
    fe00::0 ip6-localnet
    ff00::0 ip6-mcastprefix
    ff02::1 ip6-allnodes
    ff02::2 ip6-allrouters
    ff02::3 ip6-allhosts

    (It is also possible to use IP addresses instead of hostnames in the following setup. If you prefer to use IP addresses, you don't have to care about whether the hostnames can be resolved or not.)
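
    To quickly check that all five names resolve on a given system, you can run something like the following (a small sketch; getent queries the same resolver order that normal applications use):

    for h in server1 server2 server3 server4 client1; do getent hosts $h; done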


    2 Setting Up The GlusterFS Servers

    GlusterFS isn't available as a Debian package for Debian Lenny, so we have to build it ourselves. First we install the prerequisites:

    aptitude install sshfs build-essential flex bison byacc libdb4.6 libdb4.6-dev

    Then we download the latest GlusterFS release (2.0.1 at the time of writing) from the GlusterFS download page and build it as follows:

    cd /tmp
    tar xvfz glusterfs-2.0.1.tar.gz
    cd glusterfs-2.0.1
    ./configure --prefix=/usr > /dev/null

    server1:/tmp/glusterfs-2.0.1# ./configure --prefix=/usr > /dev/null

    GlusterFS configure summary
    FUSE client        : no
    Infiniband verbs   : no
    epoll IO multiplex : yes
    Berkeley-DB        : yes
    libglusterfsclient : yes
    mod_glusterfs      : no ()
    argp-standalone    : no
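
    (The "FUSE client: no" entry is fine on the storage servers; FUSE support is only needed on the client to mount the GlusterFS volume, and it presumably shows as "no" here because the FUSE development files are not installed.)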


    make && make install
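
    The build steps above must be carried out on all four storage servers. As a sketch, assuming root SSH access and the source tarball already present in /tmp on each machine, the remaining servers could be built in one loop:

    for s in server2 server3 server4; do
      ssh root@$s 'cd /tmp && tar xvfz glusterfs-2.0.1.tar.gz && cd glusterfs-2.0.1 && ./configure --prefix=/usr && make && make install'
    done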

    The command

    glusterfs --version

    should now show the GlusterFS version that you've just compiled (2.0.1 in this case):

    server1:/tmp/glusterfs-2.0.1# glusterfs --version
    glusterfs 2.0.1 built on May 29 2009 17:23:10
    Repository revision: 5c1d9108c1529a1155963cb1911f8870a674ab5b
    Copyright (c) 2006-2009 Z RESEARCH Inc.
    GlusterFS comes with ABSOLUTELY NO WARRANTY.
    You may redistribute copies of GlusterFS under the terms of the GNU General Public License.

    Next we create a few directories:

    mkdir /data/
    mkdir /data/export
    mkdir /data/export-ns
    mkdir /etc/glusterfs

    Now we create the GlusterFS server configuration file /etc/glusterfs/glusterfsd.vol which defines which directory will be exported (/data/export) and which client is allowed to connect (192.168.0.104 = client1):

    vi /etc/glusterfs/glusterfsd.vol

    volume posix
      type storage/posix
      option directory /data/export
    end-volume

    volume locks
      type features/locks
      subvolumes posix
    end-volume

    volume brick
      type performance/io-threads
      option thread-count 8
      subvolumes locks
    end-volume

    volume server
      type protocol/server
      option transport-type tcp
      option auth.addr.brick.allow 192.168.0.104
      subvolumes brick
    end-volume

    Please note that it is possible to use wildcards for the IP addresses (like 192.168.*) and that you can specify multiple IP addresses separated by comma (e.g. 192.168.0.104,192.168.0.105).
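
    For example, either of the following allow lines would be valid in the server volume (the second address is hypothetical and only illustrates the comma-separated form):

    option auth.addr.brick.allow 192.168.*
    option auth.addr.brick.allow 192.168.0.104,192.168.0.105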

    Afterwards we create the system startup links for the glusterfsd init script...

    update-rc.d glusterfsd defaults

    ... and start glusterfsd:

    /etc/init.d/glusterfsd start
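
    To verify that the daemon came up, a quick check such as the following can be used (a sketch; GlusterFS 2.0 listens on TCP port 6996 by default):

    ps aux | grep '[g]lusterfsd'       # the bracket trick keeps grep from matching itself
    netstat -tlnp | grep glusterfsd    # should show the daemon listening on its TCP port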
