Comments on High-Availability Storage with GlusterFS on CentOS 7 - Mirror across two storage servers

This tutorial shows how to set up high-availability storage with two storage servers (CentOS 7.2) that use GlusterFS. Each storage server will be a mirror of the other, and files will be replicated automatically across both storage servers. The client system (CentOS 7.2 as well) will be able to access the storage as if it were a local filesystem. GlusterFS is a clustered file system capable of scaling to several petabytes.

16 Comment(s)

Comments

By: Thomas Koo

Great post.

Could you suggest a GUI management tool for GlusterFS?

By: ahmeddaipa

yes yes 

By: fphilippon

That's great thanks.

I think you mean the 3.7.12 version instead of the 3.2.12 right?

By: till

Yes. Thank you for the hint, I've corrected the typo.

By: rajesh

Thanks, great post.

By: kamarul

If, for example, we mount from server1 and the server goes down for some reason, we lose the mounted volume. Is there a solution that provides a virtual IP for HA, so that we mount from it rather than from a single server? The VIP would automatically move to, or run on, the second server.

By: Raymond Henick

You're thinking of it from the wrong perspective, I think. Pacemaker/Corosync used to do it this way, with a heartbeat.

With GlusterFS, the IP doesn't need to change, because Gluster uses bricks and syncs on its own based on the configuration of the bricks. Point the client at the location of a brick and Gluster does the rest.

By: Martijn

Raymond, it's true that you can connect to any brick and the GlusterFS FUSE client will automatically discover the other bricks and connect to them as well. If the initial brick fails, your mount will fail over to one of the other bricks.

However, if you reboot a client host and the brick that you've set it to initially connect to (in /etc/fstab) is down, then the client won't connect at all until you point it to another brick to bootstrap it.

This can be a problem in a scenario where clients are rebooted or added while the 'primary' brick is down. For example, in Amazon AWS, suppose you have two replicating GlusterFS bricks in separate Availability Zones. When the AZ that contains your 'primary' fails or loses connectivity, there's a good chance that you'll autoscale additional servers in the other AZ to cope with the increased load there. Since the 'primary' is unreachable, those servers can't mount the filesystem until you configure them to mount the other brick.

How would you prevent that?

By: luc r

Hi Martijn, I think you can get the desired behaviour by providing alternate node names in the mount option named backupvolfile-server.
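
For reference, an /etc/fstab entry using that option might look like the sketch below. The hostnames and the volume name gv0 are placeholders, not values from this tutorial; adjust them to your own setup:

```
# Mount the gv0 volume from server1, but bootstrap from server2 if server1 is down
server1.example.com:/gv0  /mnt/glusterfs  glusterfs  defaults,_netdev,backupvolfile-server=server2.example.com  0 0
```

With this in place, a freshly booted client can still mount the volume even when the server named first is unreachable.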

By: AnotherLinuxGuy

While this will set up Gluster, it's not 100% correct. Libvirt will not work reliably with this configuration, with the result of:

libvirtd[4674]: segfault at 7f6888ec9500 ip 00007f688ab8a549 sp 00007f68802036f0 error 4 in afr.so[7f688ab40000+6a000]

libvirtd[4280]: segfault at 7ff5d2c7f440 ip 00007ff621880b66 sp 00007ff5e46cd4c0 error 4 in libglusterfs.so.0.0.1[7ff621831000+d5000]

By: John Weidauer

I have set up a VM environment with the same setup as in your walk-through: CentOS 7, Gluster 3.10.3. I have both servers up and created a volume, which gets created on server 1 and server 2. I touched files from my client with a mount to the volume on server 1; the files get created but do not replicate to server 2.

netstat on server 1 lists server 1, server 2, and the client; netstat on server 2 only lists server 2. When I run a gluster heal on the volume on server 1, I get an error on the server 2 brick: "Transport endpoint is not connected". Running the heal on server 2, it connects and reports the number of entries on server 1 as 5 and on server 2 as 0, but it will not sync.

I have restarted the service and rebooted, and I have SELinux disabled. Can you provide any help? I know logs can help, but I'm just looking for a quick response, not for you to diagnose my problem. Thanks in advance.

Server 1: netstat -tap | grep glusterfsd

tcp        0      0 0.0.0.0:49152      0.0.0.0:*          LISTEN      3781/glusterfsd

tcp        0      0 server1:49134      server1:24007      ESTABLISHED 3781/glusterfsd

tcp        0      0 server1:49152      client1:1020       ESTABLISHED 3781/glusterfsd

tcp        0      0 server1:49152      server2:49143      ESTABLISHED 3781/glusterfsd

tcp        0      0 server1:49152      server1:49136      ESTABLISHED 3781/glusterfsd

Server 2: netstat -tap | grep glusterfsd

tcp        0      0 0.0.0.0:49152      0.0.0.0:*          LISTEN      3749/glusterfsd

tcp        0      0 server2:49152      server2:49149      ESTABLISHED 3749/glusterfsd

tcp        0      0 server2:49142      server2:24007      ESTABLISHED 3749/glusterfsd
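
A few commands are commonly used to narrow down this kind of one-way connectivity problem; this is a generic checklist, not a diagnosis of this specific setup, and the volume name gv0 is a placeholder:

```
# Check that both peers see each other (run on each server)
gluster peer status

# Check that all bricks and self-heal daemons are online
gluster volume status gv0

# GlusterFS needs TCP 24007-24008 plus one port per brick (49152 and up);
# verify the firewall on server 2 allows them
firewall-cmd --list-all
```

The netstat output above, where server 2 has no connection to server 1 or the client, would be consistent with a firewall on one side blocking the brick port.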

By: William Crosmun

I'm seeing this exact issue. Is there a solution? Or do I need to abandon the idea of using gluster to provide high availability for libvirtd?

By: agung

Why does GlusterFS report "Peer Rejected" after a restart? The volume starts, but the peer is rejected.

By: lee

How do I ensure availability when server1 goes down? Must I change the mount point manually, or is there a way for the Gluster client machine to switch automatically to another Gluster server?

By: Joseph

There is a way, using HAProxy and Keepalived, to use a virtual IP. There are various tutorials on the web.
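
As a rough illustration of the Keepalived part, a minimal VRRP configuration might look like the fragment below. The interface name, router ID, and address are placeholders; server2 would run the same block with state BACKUP and a lower priority:

```
# /etc/keepalived/keepalived.conf on server1 (placeholder values)
vrrp_instance GLUSTER_VIP {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    virtual_ipaddress {
        192.168.1.100/24
    }
}
```

Clients would then mount from the virtual IP, which Keepalived moves to the surviving server on failure. Note that for the native GlusterFS FUSE client this is usually unnecessary, since the backupvolfile-server mount option already handles bootstrap failover; a VIP is mainly useful for NFS or other single-endpoint access methods.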