There is a new version of this tutorial available for CentOS 7.2.

High-Availability Storage With GlusterFS 3.2.x On CentOS 6.3 - Automatic File Replication (Mirror) Across Two Storage Servers

This tutorial shows how to set up high-availability storage with two storage servers (CentOS 6.3) that use GlusterFS. Each storage server will be a mirror of the other, and files will be replicated automatically across both storage servers. The client system (CentOS 6.3 as well) will be able to access the storage as if it were a local filesystem. GlusterFS is a clustered file system capable of scaling to several petabytes. It aggregates various storage bricks over InfiniBand RDMA or TCP/IP interconnect into one large parallel network file system. Storage bricks can be made of any commodity hardware such as x86_64 servers with SATA-II RAID and InfiniBand HBA.

I do not issue any guarantee that this will work for you!


1 Preliminary Note

In this tutorial I use three systems, two servers and a client:

  • server1 (storage server)
  • server2 (storage server)
  • client1 (client)

All three systems should be able to resolve the other systems' hostnames. If this cannot be done through DNS, you should edit the /etc/hosts file so that it looks as follows on all three systems:

vi /etc/hosts

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
<IP address of server1>   server1
<IP address of server2>   server2
<IP address of client1>   client1
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

(It is also possible to use IP addresses instead of hostnames in the following setup. If you prefer to use IP addresses, you don't have to care about whether the hostnames can be resolved or not.)


2 Enable Additional Repositories

First we import the GPG keys for software packages:

rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY*

Then we enable the EPEL6 repository on our CentOS systems. Import the EPEL GPG key, then download the epel-release-6-7.noarch.rpm package to /tmp and install it:

rpm --import <URL of the EPEL6 GPG key>

cd /tmp
wget <URL of the epel-release-6-7.noarch.rpm package>
rpm -ivh epel-release-6-7.noarch.rpm

yum install yum-priorities

Edit /etc/yum.repos.d/epel.repo...

vi /etc/yum.repos.d/epel.repo

... and add the line priority=10 to the [epel] section:

[epel]
name=Extra Packages for Enterprise Linux 6 - $basearch
[...]
priority=10
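
You can quickly verify that the EPEL repository is now enabled:

yum repolist enabled | grep -i epel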


3 Setting Up The GlusterFS Servers

GlusterFS is available as an EPEL package, so we can install it as follows:

yum install glusterfs-server

Create the system startup links for the Gluster daemon and start it:

chkconfig --levels 235 glusterd on
/etc/init.d/glusterd start
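
If you want to double-check that the daemon is running and enabled for the right runlevels, you can run:

/etc/init.d/glusterd status
chkconfig --list glusterd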

The command

glusterfsd --version

should now show the GlusterFS version that you've just installed (3.2.7 in this case):

[root@server1 ~]# glusterfsd --version
glusterfs 3.2.7 built on Jun 11 2012 13:22:28
Repository revision: git://
Copyright (c) 2006-2011 Gluster Inc. <>
You may redistribute copies of GlusterFS under the terms of the GNU General Public License.
[root@server1 ~]#

If you use a firewall, ensure that TCP ports 111, 24007, 24008, 24009-(24009 + number of bricks across all volumes) are open on server1 and server2.
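
For example, with the default iptables firewall on CentOS 6, rules like the following would open those ports for this two-brick setup (a minimal sketch; adapt it to your own firewall policy and rule order):

iptables -I INPUT -p tcp --dport 111 -j ACCEPT
iptables -I INPUT -p tcp --dport 24007:24008 -j ACCEPT
# 24009 + number of bricks; with two bricks this is 24009-24010
iptables -I INPUT -p tcp --dport 24009:24010 -j ACCEPT
service iptables save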

Next we must add server2 to the trusted storage pool (please note that I'm running all GlusterFS configuration commands from server1, but you can as well run them from server2 because the configuration is replicated between the GlusterFS nodes - just make sure you use the correct hostnames or IP addresses):

On server1, run

gluster peer probe server2

[root@server1 ~]# gluster peer probe server2
Probe successful
[root@server1 ~]#

The status of the trusted storage pool should now be similar to this:

gluster peer status

[root@server1 ~]# gluster peer status
Number of Peers: 1

Hostname: server2
Uuid: 7cd93007-fccb-4fcb-8063-133e6ba81cd9
State: Peer in Cluster (Connected)
[root@server1 ~]#
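
The same check works the other way round; on server2, the following command should list server1 as its peer:

gluster peer status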

Next we create the share named testvol with two replicas (please note that the number of replicas is equal to the number of servers in this case because we want to set up mirroring) on server1 and server2 in the /data directory (this will be created if it doesn't exist):

gluster volume create testvol replica 2 transport tcp server1:/data server2:/data

[root@server1 ~]# gluster volume create testvol replica 2 transport tcp server1:/data server2:/data
Creation of volume testvol has been successful. Please start the volume to access data.
[root@server1 ~]#

Start the volume:

gluster volume start testvol

It is possible that the above command tells you that the action was not successful:

[root@server1 ~]# gluster volume start testvol
Starting volume testvol has been unsuccessful
[root@server1 ~]#

In this case you should check the output of...

netstat -tap | grep glusterfsd

on both servers.

If you get output like this...

[root@server1 ~]# netstat -tap | grep glusterfsd
tcp        0      0 *:24009                     *:*                         LISTEN      1365/glusterfsd
tcp        0      0 localhost:1023              localhost:24007             ESTABLISHED 1365/glusterfsd
tcp        0      0    ESTABLISHED 1365/glusterfsd
[root@server1 ~]#

... everything is fine, but if you don't get any output...

[root@server2 ~]# netstat -tap | grep glusterfsd
[root@server2 ~]#

... restart the GlusterFS daemon on the corresponding server (server2 in this case):

/etc/init.d/glusterfsd restart

Then check the output of...

netstat -tap | grep glusterfsd

... again on that server - it should now look like this:

[root@server2 ~]# netstat -tap | grep glusterfsd
tcp        0      0 *:24010                 *:*                     LISTEN      1458/glusterfsd
tcp        0      0 localhost.localdom:1021 localhost.localdo:24007 ESTABLISHED 1458/glusterfsd
[root@server2 ~]#

Now back to server1:

You can check the status of the volume with the command

gluster volume info

[root@server1 ~]# gluster volume info

Volume Name: testvol
Type: Replicate
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: server1:/data
Brick2: server2:/data
[root@server1 ~]#

By default, all clients can connect to the volume. If you want to grant access to client1 only, run:

gluster volume set testvol auth.allow <IP address of client1>

Please note that it is possible to use wildcards for the IP addresses (like 192.168.*) and that you can specify multiple IP addresses separated by commas.
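
For illustration only, this is what a wildcard and a comma-separated list would look like (the addresses below are made-up examples, not the ones from this setup):

gluster volume set testvol auth.allow 192.168.0.*
gluster volume set testvol auth.allow 192.168.0.102,192.168.0.103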

The volume info should now show the updated status:

gluster volume info

[root@server1 ~]# gluster volume info

Volume Name: testvol
Type: Replicate
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: server1:/data
Brick2: server2:/data
Options Reconfigured:
auth.allow: <IP address of client1>
[root@server1 ~]#


4 Setting Up The GlusterFS Client

On the client, we can install the GlusterFS client as follows:

yum install glusterfs-client

Then we create the following directory:

mkdir /mnt/glusterfs

That's it! Now we can mount the GlusterFS filesystem to /mnt/glusterfs with the following command:

mount.glusterfs server1:/testvol /mnt/glusterfs

(Instead of server1 you can as well use server2 in the above command!)

You should now see the new share in the outputs of...

mount
[root@client1 ~]# mount
/dev/mapper/vg_client1-LogVol00 on / type ext4 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
tmpfs on /dev/shm type tmpfs (rw)
/dev/sda1 on /boot type ext4 (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
server1:/testvol on /mnt/glusterfs type fuse.glusterfs (rw,allow_other,default_permissions,max_read=131072)
[root@client1 ~]#

... and...

df -h

[root@client1 ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/vg_client1-LogVol00
                      9.7G  1.7G  7.5G  19% /
tmpfs                 499M     0  499M   0% /dev/shm
/dev/sda1             504M   39M  440M   9% /boot
server1:/testvol       29G  1.1G   27G   4% /mnt/glusterfs
[root@client1 ~]#

Instead of mounting the GlusterFS share manually on the client, you could modify /etc/fstab so that the share gets mounted automatically when the client boots.

Open /etc/fstab and append the following line:

vi /etc/fstab

[...]
server1:/testvol /mnt/glusterfs glusterfs defaults,_netdev 0 0

(Again, instead of server1 you can as well use server2!)
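
If you'd like to check the new entry before rebooting, you can (optionally) unmount the share and let mount re-read /etc/fstab:

umount /mnt/glusterfs
mount -a
mount | grep glusterfs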

To test if your modified /etc/fstab is working, reboot the client:

reboot
After the reboot, you should find the share in the outputs of...

df -h

... and...

mount

5 Testing

Now let's create some test files on the GlusterFS share:

touch /mnt/glusterfs/test1
touch /mnt/glusterfs/test2

Now let's check the /data directory on server1 and server2. The test1 and test2 files should be present on each node:

ls -l /data

[root@server1 ~]# ls -l /data
total 8
-rw-r--r-- 1 root root 0 2012-12-17 11:17 test1
-rw-r--r-- 1 root root 0 2012-12-17 11:17 test2
[root@server1 ~]#

Now we shut down server1 and add/delete some files on the GlusterFS share on client1.

On server1:

shutdown -h now

On client1:

touch /mnt/glusterfs/test3
touch /mnt/glusterfs/test4
rm -f /mnt/glusterfs/test2

The changes should be visible in the /data directory on server2:

ls -l /data

[root@server2 ~]# ls -l /data
total 8
-rw-r--r-- 1 root root 0 2012-12-17 11:17 test1
-rw-r--r-- 1 root root 0 2012-12-17 11:38 test3
-rw-r--r-- 1 root root 0 2012-12-17 11:38 test4
[root@server2 ~]#

Let's boot server1 again and take a look at the /data directory:

ls -l /data

[root@server1 ~]# ls -l /data
total 8
-rw-r--r-- 1 root root 0 2012-12-17 11:17 test1
-rw-r--r-- 1 root root 0 2012-12-17 11:17 test2
[root@server1 ~]#

As you see, server1 hasn't noticed the changes that happened while it was down. This is easy to fix: all we need to do is invoke a read command on the GlusterFS share on client1, e.g.:

ls -l /mnt/glusterfs/

[root@client1 ~]# ls -l /mnt/glusterfs/
total 8
-rw-r--r-- 1 root root 0 2012-12-17 11:17 test1
-rw-r--r-- 1 root root 0 2012-12-17 11:38 test3
-rw-r--r-- 1 root root 0 2012-12-17 11:38 test4
[root@client1 ~]#
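
Reading an entry on the client is what triggers the self-heal for it in GlusterFS 3.2. If you want to make sure every file and directory is healed after a longer outage, a common approach is to stat everything on the mounted share (run on client1, using this tutorial's mount point):

find /mnt/glusterfs -print0 | xargs -0 stat > /dev/null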

Now take a look at the /data directory on server1 again, and you should see that the changes have been replicated to that node:

ls -l /data

[root@server1 ~]# ls -l /data
total 4
-rw-r--r-- 1 root root 0 2012-12-17 11:17 test1
-rw-r--r-- 1 root root 0 2012-12-17 11:38 test3
-rw-r--r-- 1 root root 0 2012-12-17 11:38 test4
[root@server1 ~]#


Comments



From: Alex

Adding your glusterfs mount to /etc/fstab with the "defaults,_netdev" parameters won't work and will prevent your system from booting.

Mounting on boot uses mount.glusterfs, not the "mount" command, and mount.glusterfs doesn't recognize _netdev, so your system will hang trying to "mount local filesystems" before it loads network services.

Instead you'll need to configure a startup script that will run after network services have loaded.
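
One simple way to do that on CentOS 6 is to put the mount command into /etc/rc.d/rc.local, which runs at the end of the boot process after networking is up (a sketch only, reusing this tutorial's volume name and mount point):

# append to /etc/rc.d/rc.local on the client
mount.glusterfs server1:/testvol /mnt/glusterfs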

From: Dave

Thanks for this tutorial. I have a question.

After the Gluster volume is mounted locally on both servers, you mount it from server1 on client1 and can then create/edit files. But in your example you then shut down server1 and have client1 make changes; since client1 specifically used server1 as its mount point, the client freezes when trying to make changes.

Would that scenario require a load balancer or something?



From: sm1ly

Do you have any suggestions? I got a freeze too. The only workaround I can think of is something like VRRP balancing with a virtual IP, but I don't know what problems that could cause.

From: Zeke

How can I configure Gluster without a client?

I'm using Gluster for a mail server.