High-Availability Storage with GlusterFS on CentOS 7 - Mirror across two storage servers

This tutorial shows how to set up high-availability storage with two storage servers (CentOS 7.2) that use GlusterFS. Each storage server will be a mirror of the other, and files will be replicated automatically across both servers. The client system (CentOS 7.2 as well) will be able to access the storage as if it were a local filesystem. GlusterFS is a clustered file system capable of scaling to several petabytes. It aggregates various storage bricks over InfiniBand RDMA or TCP/IP interconnect into one large parallel network file system. Storage bricks can be made of any commodity hardware such as x86_64 servers with SATA-II RAID and InfiniBand HBA.

 

1 Preliminary Note

In this tutorial I use three systems, two servers, and a client:

  • server1.example.com: IP address 192.168.0.100 (server)
  • server2.example.com: IP address 192.168.0.101 (server)
  • client1.example.com: IP address 192.168.0.102 (client)

All three systems should be able to resolve the other systems' hostnames. If this cannot be done through DNS, you should edit the /etc/hosts file so that it looks as follows on all three systems:

nano /etc/hosts

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
192.168.0.100   server1.example.com     server1
192.168.0.101   server2.example.com     server2
192.168.0.102   client1.example.com     client1

::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

(It is also possible to use IP addresses instead of hostnames in the following setup. If you prefer to use IP addresses, you don't have to care about whether the hostnames can be resolved or not.)
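To quickly verify that name resolution works on each system before you continue, you can simply ping the other hosts by name, e.g.:

ping -c 1 server1.example.com
ping -c 1 server2.example.com
ping -c 1 client1.example.com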

 

2 Enable additional Repositories

server1.example.com/server2.example.com/client1.example.com:

First, we import the GPG keys for software packages:

rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY*

Then we enable the EPEL 7 repository on our CentOS systems:

yum -y install epel-release

yum -y install yum-priorities

Edit /etc/yum.repos.d/epel.repo...

nano /etc/yum.repos.d/epel.repo

... and add the line priority=10 to the [epel] section:

[epel]
name=Extra Packages for Enterprise Linux 7 - $basearch
#baseurl=http://download.fedoraproject.org/pub/epel/7/$basearch
mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-7&arch=$basearch
failovermethod=priority
enabled=1
priority=10
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
[...]
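If you prefer to make this change non-interactively, a sed one-liner can insert the priority right after the [epel] section header (a sketch that assumes priority= is not already present in the file):

sed -i '/^\[epel\]$/a priority=10' /etc/yum.repos.d/epel.repo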

Then we update our existing packages on the system:

yum -y update

 

3 Setting Up the GlusterFS Servers

server1.example.com/server2.example.com:

GlusterFS is available in the repository of the CentOS storage special interest group. Install the repository with this command:

yum -y install centos-release-gluster

Then install the GlusterFS server as follows:

yum -y install glusterfs-server

Create the system startup links for the Gluster daemon and start it:

systemctl enable glusterd.service
systemctl start glusterd.service
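You can verify that the daemon is actually running with:

systemctl status glusterd.service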

The command

glusterfsd --version

should now show the GlusterFS version that you've just installed (3.7.12 in this case):

[root@server1 ~]# glusterfsd --version
glusterfs 3.7.12 built on Jun 24 2016 14:11:19
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2013 Red Hat, Inc. <http://www.redhat.com/>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
It is licensed to you under your choice of the GNU Lesser
General Public License, version 3 or any later version (LGPLv3
or later), or the GNU General Public License, version 2 (GPLv2),
in all cases as published by the Free Software Foundation.

If you use a firewall, ensure that TCP ports 111, 24007, and 24008 are open on server1.example.com and server2.example.com, plus one port per brick. Older GlusterFS releases used brick ports starting at 24009; recent versions (including the 3.7 release installed here) use ports starting at 49152, as you can see in the netstat output further down.
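Since CentOS 7 uses firewalld by default, a minimal sketch for opening these ports on both servers could look like this (the 49152-49156 brick port range is an assumption that covers a handful of bricks; adjust it to your setup):

firewall-cmd --permanent --add-port=111/tcp
firewall-cmd --permanent --add-port=24007-24008/tcp
firewall-cmd --permanent --add-port=49152-49156/tcp
firewall-cmd --reload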

Next, we must add server2.example.com to the trusted storage pool (please note that I'm running all GlusterFS configuration commands from server1.example.com, but you can just as well run them from server2.example.com because the configuration is replicated between the GlusterFS nodes - just make sure you use the correct hostnames or IP addresses):

server1.example.com:

On server1.example.com, run

gluster peer probe server2.example.com

[root@server1 ~]# gluster peer probe server2.example.com
peer probe: success.

The status of the trusted storage pool should now be similar to this:

gluster peer status

[root@server1 ~]# gluster peer status

Number of Peers: 1

Hostname: server2.example.com
Uuid: 582e10da-aa1b-40b8-908c-213f16f57fe5
State: Peer in Cluster (Connected)

Next, we create the share named testvol with two replicas (please note that the number of replicas is equal to the number of servers in this case because we want to set up mirroring) on server1.example.com and server2.example.com in the /data directory (this will be created if it doesn't exist):

gluster volume create testvol replica 2 transport tcp server1.example.com:/data server2.example.com:/data force

[root@server1 ~]# gluster volume create testvol replica 2 transport tcp server1.example.com:/data server2.example.com:/data force
volume create: testvol: success: please start the volume to access data
[root@server1 ~]#
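The force option is needed because /data lies on the root filesystem and GlusterFS otherwise refuses to create bricks on the root partition. For a production setup, the Gluster documentation recommends putting each brick on a dedicated filesystem; a minimal sketch, assuming each server has a spare disk /dev/sdb, would be:

mkfs.xfs -i size=512 /dev/sdb
mkdir -p /data
mount /dev/sdb /data
mkdir /data/brick1

With the bricks on their own filesystem (e.g. server1.example.com:/data/brick1 and server2.example.com:/data/brick1 in the volume create command), the force option is no longer required.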

Start the volume:

gluster volume start testvol

The result should be:

[root@server1 ~]# gluster volume start testvol
volume start: testvol: success
[root@server1 ~]#

It is possible that the above command tells you that the action was not successful:

[root@server1 ~]# gluster volume start testvol
Starting volume testvol has been unsuccessful
[root@server1 ~]#

In this case, you should check the output of...

server1.example.com/server2.example.com:

netstat -tap | grep glusterfsd

on both servers.

If you get output like this...

[root@server1 ~]# netstat -tap | grep glusterfsd
tcp 0 0 0.0.0.0:49152 0.0.0.0:* LISTEN 22880/glusterfsd
tcp 0 0 server1.example.c:49152 server2.example.c:49148 ESTABLISHED 22880/glusterfsd
tcp 0 0 server1.example.c:49152 server1.example.c:49148 ESTABLISHED 22880/glusterfsd
tcp 0 0 server1.example.c:49150 server1.example.c:24007 ESTABLISHED 22880/glusterfsd
tcp 0 0 server1.example.c:49152 server2.example.c:49142 ESTABLISHED 22880/glusterfsd
tcp 0 0 server1.example.c:49152 server1.example.c:49149 ESTABLISHED 22880/glusterfsd
[root@server1 ~]#

... everything is fine, but if you don't get any output...

[root@server2 ~]# netstat -tap | grep glusterfsd
[root@server2 ~]#

... restart the GlusterFS daemon on the corresponding server (server2.example.com in this case):

server2.example.com:

systemctl restart glusterd.service

Then check the output of...

netstat -tap | grep glusterfsd

... again on that server - it should now look like this:

[root@server2 ~]# netstat -tap | grep glusterfsd
tcp 0 0 0.0.0.0:49152 0.0.0.0:* LISTEN 10971/glusterfsd
tcp 0 0 server2.example.c:49152 server1.example.c:49140 ESTABLISHED 10971/glusterfsd
tcp 0 0 server2.example.c:49152 server2.example.c:49149 ESTABLISHED 10971/glusterfsd
tcp 0 0 server2.example.c:49152 server2.example.c:49143 ESTABLISHED 10971/glusterfsd
tcp 0 0 server2.example.c:49152 server1.example.c:49142 ESTABLISHED 10971/glusterfsd
tcp 0 0 server2.example.c:49150 server2.example.c:24007 ESTABLISHED 10971/glusterfsd
[root@server2 ~]#

Now back to server1.example.com:

server1.example.com:

You can check the status of the volume with the command

gluster volume info

[root@server1 ~]# gluster volume info

Volume Name: testvol
Type: Replicate
Volume ID: e1f825ca-c9d9-4eeb-b6c5-d62c4aa02376
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: server1.example.com:/data
Brick2: server2.example.com:/data
Options Reconfigured:
performance.readdir-ahead: on
[root@server1 ~]#

By default, all clients can connect to the volume. If you want to grant access to client1.example.com (= 192.168.0.102) only, run:

gluster volume set testvol auth.allow 192.168.0.102

Please note that it is possible to use wildcards for the IP addresses (like 192.168.0.*) and that you can specify multiple IP addresses separated by commas (e.g. 192.168.0.102,192.168.0.103).
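For example, to allow the whole 192.168.0.x network, or only our client plus a second (hypothetical) client 192.168.0.103, you would run one of:

gluster volume set testvol auth.allow 192.168.0.*
gluster volume set testvol auth.allow 192.168.0.102,192.168.0.103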

The volume info should now show the updated status:

gluster volume info

[root@server1 ~]# gluster volume info

Volume Name: testvol
Type: Replicate
Volume ID: e1f825ca-c9d9-4eeb-b6c5-d62c4aa02376
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: server1.example.com:/data
Brick2: server2.example.com:/data
Options Reconfigured:
auth.allow: 192.168.0.102
performance.readdir-ahead: on
[root@server1 ~]#
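Besides gluster volume info, you can check whether both brick processes are actually online with:

gluster volume status testvol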

 

4 Setting Up the GlusterFS Client

client1.example.com:

On the client, we can install the GlusterFS client as follows:

yum -y install glusterfs-client

Then we create the following directory:

mkdir /mnt/glusterfs

That's it! Now we can mount the GlusterFS filesystem to /mnt/glusterfs with the following command:

mount.glusterfs server1.example.com:/testvol /mnt/glusterfs

(Instead of server1.example.com you can just as well use server2.example.com in the above command!)
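The server named in the mount command is only used to fetch the volume layout; afterwards, the client talks to all bricks directly. If you want the mount to succeed even when that server happens to be down at mount time, the FUSE client can be given a backup volfile server. A sketch (the option is called backupvolfile-server in the 3.7 client as far as I know; check mount.glusterfs(8) on your system):

mount -t glusterfs -o backupvolfile-server=server2.example.com server1.example.com:/testvol /mnt/glusterfs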

You should now see the new share in the outputs of...

mount

[root@client1 ~]# mount
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime,seclabel)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
devtmpfs on /dev type devtmpfs (rw,nosuid,seclabel,size=930336k,nr_inodes=232584,mode=755)
securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev,seclabel)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,seclabel,gid=5,mode=620,ptmxmode=000)
tmpfs on /run type tmpfs (rw,nosuid,nodev,seclabel,mode=755)
tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,seclabel,mode=755)
cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd)
pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime)
cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset)
cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,hugetlb)
cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices)
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpuacct,cpu)
cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio)
cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory)
cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event)
cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer)
cgroup on /sys/fs/cgroup/net_cls type cgroup (rw,nosuid,nodev,noexec,relatime,net_cls)
configfs on /sys/kernel/config type configfs (rw,relatime)
/dev/mapper/centos-root on / type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
selinuxfs on /sys/fs/selinux type selinuxfs (rw,relatime)
systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=34,pgrp=1,timeout=300,minproto=5,maxproto=5,direct)
debugfs on /sys/kernel/debug type debugfs (rw,relatime)
mqueue on /dev/mqueue type mqueue (rw,relatime,seclabel)
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime,seclabel)
/dev/sda1 on /boot type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
tmpfs on /run/user/0 type tmpfs (rw,nosuid,nodev,relatime,seclabel,size=188060k,mode=700)
server1.example.com:/testvol on /mnt/glusterfs type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
fusectl on /sys/fs/fuse/connections type fusectl (rw,relatime)
[root@client1 ~]#

... and...

df -h

[root@client1 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/centos-root 28G 1.3G 27G 5% /
devtmpfs 909M 0 909M 0% /dev
tmpfs 919M 0 919M 0% /dev/shm
tmpfs 919M 8.6M 910M 1% /run
tmpfs 919M 0 919M 0% /sys/fs/cgroup
/dev/sda1 497M 192M 306M 39% /boot
tmpfs 184M 0 184M 0% /run/user/0
server1.example.com:/testvol 28G 12G 17G 41% /mnt/glusterfs
[root@client1 ~]#

Instead of mounting the GlusterFS share manually on the client, you can add the mount command to the /etc/rc.local file. I will not add it to /etc/fstab because rc.local is always executed after the network is up, which is required for a network file system.

Open /etc/rc.local and append the following line:

nano /etc/rc.local

[...]
/usr/sbin/mount.glusterfs server1.example.com:/testvol /mnt/glusterfs

(Again, instead of server1.example.com you can just as well use server2.example.com!)
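Please note that on CentOS 7, /etc/rc.local is only executed if it is marked executable, so make sure the executable bit is set:

chmod +x /etc/rc.d/rc.local

If you prefer /etc/fstab after all, an entry with the _netdev option also delays the mount until the network is up (a sketch, not used in this tutorial):

server1.example.com:/testvol /mnt/glusterfs glusterfs defaults,_netdev 0 0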

To test if your modified /etc/rc.local is working, reboot the client:

reboot

After the reboot, you should find the share in the outputs of...

df -h

... and...

mount

 

5 Testing

Now let's create some test files on the GlusterFS share:

client1.example.com:

touch /mnt/glusterfs/test1
touch /mnt/glusterfs/test2

Now let's check the /data directory on server1.example.com and server2.example.com. The test1 and test2 files should be present on each node:

server1.example.com/server2.example.com:

ls -l /data

[root@server1 ~]# ls -l /data
total 0
-rw-r--r--. 2 root root 0 Jul 1 2016 test1
-rw-r--r--. 2 root root 0 Jul 1 2016 test2
[root@server1 ~]#

Now we shut down server1.example.com and add/delete some files on the GlusterFS share on client1.example.com.

server1.example.com:

shutdown -h now

client1.example.com:

touch /mnt/glusterfs/test3
touch /mnt/glusterfs/test4
rm -f /mnt/glusterfs/test2

The commands may take some time to execute because GlusterFS switches over to server2 once it can no longer reach server1 (see the note on network.ping-timeout after the next listing). Here we can see the fault tolerance of the system, as we can still work on our data storage share while server1 is offline. The changes should be visible in the /data directory on server2.example.com:

server2.example.com:

ls -l /data

[root@server2 ~]# ls -l /data
total 8
-rw-r--r--. 2 root root 0 Jul 1 15:17 test1
-rw-r--r--. 2 root root 0 Jul 1 15:19 test3
-rw-r--r--. 2 root root 0 Jul 1 15:19 test4
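The delay before the client switches over is governed by the volume option network.ping-timeout (42 seconds by default, as far as I know). If the failover takes too long for your use case, you can lower it, e.g.:

gluster volume set testvol network.ping-timeout 10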

Let's boot server1.example.com again and take a look at the /data directory:

server1.example.com:

ls -l /data

[root@server1 ~]# ls -l /data
total 8
-rw-r--r--. 2 root root 0 Jul 1 15:17 test1
-rw-r--r--. 2 root root 0 Jul 1 15:19 test2
[root@server1 ~]#

As you can see, the changes have not been synced to server1.example.com yet in this case (test2 is still there, and test3 and test4 are missing). This is easy to fix: all we need to do is invoke a read command on the GlusterFS share on client1.example.com, e.g.:

client1.example.com:

ls -l /mnt/glusterfs/

[root@client1 ~]# ls -l /mnt/glusterfs/
total 8
-rw-r--r--. 2 root root 0 Jul 1 15:17 test1
-rw-r--r--. 2 root root 0 Jul 1 15:19 test3
-rw-r--r--. 2 root root 0 Jul 1 15:19 test4
[root@client1 ~]#
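Instead of relying on a read from the client to trigger the self-heal, you can also start a heal explicitly from one of the servers and check whether any entries are still pending:

gluster volume heal testvol
gluster volume heal testvol info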

Now take a look at the /data directory on server1.example.com again, and you should see that the changes have been replicated to that node:

server1.example.com:

ls -l /data

[root@server1 ~]# ls -l /data
total 8
-rw-r--r--. 2 root root 0 Jul 1 15:17 test1
-rw-r--r--. 2 root root 0 Jul 1 15:19 test3
-rw-r--r--. 2 root root 0 Jul 1 15:19 test4
[root@server1 ~]#

 

Comments

From: Thomas Koo

Great post.

Could you suggest a GUI management tool for GlusterFS?

From: ahmeddaipa

yes yes 

From: fphilippon

That's great, thanks.

I think you mean the 3.7.12 version instead of the 3.2.12, right?

From: till

Yes. Thank you for the hint, I've corrected the typo.

From: rajesh

Thanks great post

From: kamarul

If, for example, we mount from server1 and for some reason that server goes down and we lose the mounted volume, is there any solution that provides a virtual IP for HA so that we mount from it rather than from a single server? The VIP would automatically move to, or run on, the second server.

From: Raymond Henick

You're thinking of it from the wrong perspective, I think. Pacemaker/Corosync used to do it this way with a heartbeat.

Using GlusterFS, the IP doesn't need to change because Gluster uses bricks and syncs on its own based on the configuration of the bricks... so the IP address never actually needs to change. Point the client at the location of a brick and Gluster does the rest.

From: Martijn

Raymond, it's true that you can connect to any brick and the GlusterFS FUSE client will automatically discover the other bricks and connect to them as well. If the initial brick fails, your mount will fail over to one of the other bricks.

However, if you reboot a client host and the brick that you've set it to initially connect to (in /etc/fstab) is down, then the client won't connect at all until you point it to another brick to bootstrap it.

This can be a problem in a scenario where clients are rebooted or added while the 'primary' brick is down. For example, in Amazon AWS, suppose you have two replicating GlusterFS bricks in separate Availability Zones. When the AZ that contains your 'primary' fails or loses connectivity, there's a good chance that you'll autoscale additional servers in the other AZ to cope with the increased load there. Since the 'primary' is unreachable, those servers can't mount the filesystem until you configure them to mount the other brick.

How would you prevent that?

From: AnotherLinuxGuy

Whereas this will set up Gluster, it's not 100% correct. Libvirt will not work with this configuration reliably, with the result of:

 

libvirtd[4674]: segfault at 7f6888ec9500 ip 00007f688ab8a549 sp 00007f68802036f0 error 4 in \ afr.so[7f688ab40000+6a000]                                                                                                                        

 

libvirtd[4280]: segfault at 7ff5d2c7f440 ip 00007ff621880b66 sp 00007ff5e46cd4c0 error 4 in   \ libglusterfs.so.0.0.1[7ff621831000+d5000]  

From: John Weidauer

I have set up a VM environment with the same setup as in your walk-through, CentOS 7, Gluster 3.10.3. I have both servers up and created a volume, which gets created on server 1 and server 2. I touched files from my client with a mount to the volume on server 1; the files get created but do not replicate to server 2.

netstat on server 1 lists server 1, server 2 and the client; netstat on server 2 only lists server 2. I run a gluster heal on the volume on server 1 and I get an error on the server 2 brick, "Transport endpoint is not connected", but running the heal on server 2, it connects and reports the number of entries on server 1 (5) and the number of entries on server 2 as 0, but it will not sync.

I have restarted the service, rebooted, and I have SELinux disabled. Can you provide any help? I know logs can help, but I'm just looking for a quick response and not for you to diagnose my problem. Thanks in advance.

Server 1: netstat -tap | grep glusterfsd

tcp        0      0 0.0.0.0:49152           0.0.0.0:*               LISTEN      3781/glusterfsd
tcp        0      0 server1:49134           server1:24007           ESTABLISHED 3781/glusterfsd
tcp        0      0 server1:49152           client1:1020            ESTABLISHED 3781/glusterfsd
tcp        0      0 server1:49152           server2:49143           ESTABLISHED 3781/glusterfsd
tcp        0      0 server1:49152           server1:49136           ESTABLISHED 3781/glusterfsd

Server 2: netstat -tap | grep glusterfsd

tcp        0      0 0.0.0.0:49152           0.0.0.0:*               LISTEN      3749/glusterfsd
tcp        0      0 server2:49152           server2:49149           ESTABLISHED 3749/glusterfsd
tcp        0      0 server2:49142           server2:24007           ESTABLISHED 3749/glusterfsd