Creating An NFS-Like Standalone Storage Server With GlusterFS On Fedora 13

On this page

  1. Preliminary Note
  2. Setting Up The GlusterFS Server
  3. Setting Up The GlusterFS Client
  4. Links

This tutorial shows how to set up a standalone storage server on Fedora 13. Instead of NFS, I will use GlusterFS here. The client system will be able to access the storage as if it were a local filesystem. GlusterFS is a clustered filesystem capable of scaling to several petabytes. It aggregates various storage bricks over Infiniband RDMA or TCP/IP interconnects into one large parallel network filesystem. Storage bricks can be made of any commodity hardware, such as x86_64 servers with SATA-II RAID and Infiniband HBA.

I do not issue any guarantee that this will work for you!


1 Preliminary Note

In this tutorial I use two systems, a server and a client:

  • server1 (the server)
  • client1 (the client)

Both systems should be able to resolve the other system's hostname. If this cannot be done through DNS, you should edit the /etc/hosts file so that it contains the following two lines on both systems:

vi /etc/hosts

[...]  server1
[...]  client1

(It is also possible to use IP addresses instead of hostnames in the following setup. If you prefer to use IP addresses, you don't have to care about whether the hostnames can be resolved or not.)
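Whether resolution actually works can be checked before going any further. A minimal sketch (the helper name check_host is mine; localhost stands in here for the tutorial's server1/client1 hostnames):

```shell
# check_host is a hypothetical helper: getent consults the same sources as
# glibc (so /etc/hosts as well as DNS), which makes it a good resolution test.
check_host() {
  if getent hosts "$1" > /dev/null; then
    echo "$1 resolves"
  else
    echo "$1 does NOT resolve"
  fi
}

# Run it against server1 and client1 on both machines; localhost shown here.
check_host localhost
```

If a hostname does not resolve, fix /etc/hosts as described above before continuing.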


2 Setting Up The GlusterFS Server

The GlusterFS server is available as a package for Fedora 13, so we can install it as follows:

yum install glusterfs-server

The command

glusterfs --version

should now show the GlusterFS version that you've just installed (2.0.9 in this case):

[root@server1 ~]# glusterfs --version
glusterfs 2.0.9 built on Apr 11 2010 20:39:55
Repository revision: v2.0.9
Copyright (c) 2006-2009 Gluster Inc. <>
You may redistribute copies of GlusterFS under the terms of the GNU General Public License.
[root@server1 ~]#

Next we create a few directories:

mkdir /data/
mkdir /data/export
mkdir /data/export-ns

Now we create the GlusterFS server configuration file /etc/glusterfs/glusterfsd.vol (we make a backup of the original /etc/glusterfs/glusterfsd.vol file first), which defines which directory will be exported (/data/export) and which client is allowed to connect:

cp /etc/glusterfs/glusterfsd.vol /etc/glusterfs/glusterfsd.vol_orig
cat /dev/null > /etc/glusterfs/glusterfsd.vol
vi /etc/glusterfs/glusterfsd.vol

volume posix
  type storage/posix
  option directory /data/export

volume locks
  type features/locks
  option mandatory-locks on
  subvolumes posix

volume brick
  type performance/io-threads
  option thread-count 8
  subvolumes locks

volume server
  type protocol/server
  option transport-type tcp
  option auth.addr.brick.allow  # edit: add a comma-separated list of allowed client IP addresses (or hostnames) here
  subvolumes brick

Please note that it is possible to use wildcards for the IP addresses (like 192.168.*) and that you can specify multiple IP addresses separated by commas.
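The allow patterns behave like shell globs. A small illustration (match_ip is my own stand-in that uses a shell case glob to mimic how a pattern such as 192.168.* would be evaluated; it is not GlusterFS code):

```shell
# match_ip mimics wildcard matching of an auth.addr allow pattern with a
# shell case glob; purely illustrative, not part of GlusterFS.
match_ip() {
  case "$1" in
    192.168.*) echo "allowed" ;;
    *)         echo "denied"  ;;
  esac
}

match_ip 192.168.0.101   # a LAN client matches the 192.168.* pattern
match_ip 10.0.0.5        # anything else is rejected
```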

Afterwards we create the system startup links for the GlusterFS server and start it:

chkconfig --levels 35 glusterfsd on
/etc/init.d/glusterfsd start


3 Setting Up The GlusterFS Client

There's a GlusterFS client rpm package for Fedora 13, but it is broken: when you try to access the GlusterFS share, you get errors such as df: `/mnt/glusterfs': Software caused connection abort or df: `/mnt/glusterfs': Transport endpoint is not connected. That's why we build the GlusterFS client from the sources instead.

Before we build the GlusterFS client, we install its prerequisites:

yum groupinstall 'Development Tools'

yum groupinstall 'Development Libraries'

yum install libibverbs-devel fuse-devel

Then we download the GlusterFS 2.0.9 sources (please note that this is the same version that is installed on the server!) and build GlusterFS as follows:

cd /tmp
tar xvfz glusterfs-2.0.9.tar.gz
cd glusterfs-2.0.9
./configure

At the end of the ./configure command, you should see something like this:

GlusterFS configure summary
FUSE client        : yes
Infiniband verbs   : yes
epoll IO multiplex : yes
Berkeley-DB        : yes
libglusterfsclient : yes
argp-standalone    : no

[root@client1 glusterfs-2.0.9]#

make && make install

Check the GlusterFS version afterwards (should be 2.0.9):

glusterfs --version

[root@client1 glusterfs-2.0.9]# glusterfs --version
glusterfs 2.0.9 built on Sep 27 2010 19:20:46
Repository revision: v2.0.9
Copyright (c) 2006-2009 Gluster Inc. <>
You may redistribute copies of GlusterFS under the terms of the GNU General Public License.
[root@client1 glusterfs-2.0.9]#

Then we create the following two directories:

mkdir /mnt/glusterfs
mkdir /etc/glusterfs

Next we create the file /etc/glusterfs/glusterfs.vol:

vi /etc/glusterfs/glusterfs.vol

volume remote
  type protocol/client
  option transport-type tcp
  option remote-host # can be IP or hostname
  option remote-subvolume brick

volume writebehind
  type performance/write-behind
  option window-size 4MB
  subvolumes remote

volume cache
  type performance/io-cache
  option cache-size 512MB
  subvolumes writebehind

Make sure you use the correct server hostname or IP address in the option remote-host line!

That's it! Now we can mount the GlusterFS filesystem to /mnt/glusterfs with one of the following two commands:

glusterfs -f /etc/glusterfs/glusterfs.vol /mnt/glusterfs

or

mount -t glusterfs /etc/glusterfs/glusterfs.vol /mnt/glusterfs
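To verify programmatically that a directory is really a mount point (rather than eyeballing df output), the mountpoint utility from util-linux can be used. A quick sketch, shown here against / since /mnt/glusterfs only exists on the client:

```shell
# mountpoint -q exits 0 when the directory is a mount point.
# Replace / with /mnt/glusterfs on the client after mounting the share.
if mountpoint -q /; then
  echo "mounted"
else
  echo "not mounted"
fi
```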

You should now see the new share in the outputs of...

mount

[root@client1 glusterfs-2.0.9]# mount
/dev/mapper/vg_client1-lv_root on / type ext4 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
tmpfs on /dev/shm type tmpfs (rw)
/dev/sda1 on /boot type ext4 (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
/etc/glusterfs/glusterfs.vol on /mnt/glusterfs type fuse.glusterfs (rw,allow_other,default_permissions,max_read=131072)
[root@client1 glusterfs-2.0.9]#

... and...

df -h

[root@client1 glusterfs-2.0.9]# df -h
Filesystem            Size  Used Avail Use% Mounted on
                       29G  2.6G   25G  10% /
tmpfs                 185M     0  185M   0% /dev/shm
/dev/sda1             194M   23M  161M  13% /boot
                       29G  2.7G   25G  10% /mnt/glusterfs
[root@client1 glusterfs-2.0.9]#

Instead of mounting the GlusterFS share manually on the client, you could modify /etc/fstab so that the share gets mounted automatically when the client boots.

Open /etc/fstab and append the following line:

vi /etc/fstab

/etc/glusterfs/glusterfs.vol  /mnt/glusterfs  glusterfs  defaults  0  0
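It is easy to get the field count of an fstab line wrong. This sketch just splits the line above and counts the six fields mount(8) expects; it does not touch the real /etc/fstab:

```shell
# An fstab entry has 6 whitespace-separated fields:
# device/volfile, mount point, fs type, options, dump flag, fsck pass.
line='/etc/glusterfs/glusterfs.vol  /mnt/glusterfs  glusterfs  defaults  0  0'
set -- $line
echo "fields: $#"   # prints "fields: 6"
```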

To test if your modified /etc/fstab is working, reboot the client:

reboot

After the reboot, you should find the share in the outputs of...

df -h

... and...

mount