Comments on High-Availability Storage with GlusterFS on Debian 8 - Mirror across two storage servers

This tutorial shows how to set up high-availability storage with two storage servers (Debian Jessie) that use GlusterFS. Each storage server will be a mirror of the other, and files will be replicated automatically across both storage nodes. The client system (Debian 8 as well) will be able to access the storage as if it were a local filesystem.

Comments

By: Fox

What happens if you mount from server1 and it dies? Are you still able to mount from it, or do you have to manually mount from server2?

By: till

The GlusterFS client switches to server2 automatically when it can't connect to server1 anymore.
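The server named in the mount command is only used to fetch the volume file; after that, the native FUSE client talks to all bricks directly, so an already mounted volume survives the loss of server1. A quick way to verify this, assuming the hostnames, volume name, and mount point from the tutorial:

poweroff                                  # on server1

ls /mnt/glusterfs                         # on the client; may stall for up to 42 seconds (the default network.ping-timeout)
touch /mnt/glusterfs/failover-test.txt    # hypothetical test file; it is written to server2

When server1 comes back online, the self-heal mechanism copies failover-test.txt over to it automatically.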

By: michael

In your howto you instruct the reader to use this mount point:

server1.example.com:/testvol /mnt/glusterfs glusterfs defaults,_netdev 0 0

Later you test replication by shutting down server1 and creating new files in /mnt/glusterfs. How is this possible when server1 is shut down?

By: till

Please see my answer above to the comment from Fox. GlusterFS handles this automatically.
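The hostname in /etc/fstab only matters at mount time, when the volume file is fetched. If server1 might be down while the client boots, you can name a fallback server in the mount options; depending on the GlusterFS version the option is spelled backupvolfile-server or backup-volfile-servers. A sketch based on the tutorial's fstab line:

server1.example.com:/testvol /mnt/glusterfs glusterfs defaults,_netdev,backup-volfile-servers=server2.example.com 0 0

Once the volume is mounted, the client writes to both bricks itself, so shutting down server1 afterwards does not stop access to /mnt/glusterfs.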

By: Joseph

Great tut. Works perfectly! GlusterFS performance is relatively poor, though, when compared to other FUSE filesystems like SSHFS. Performance on a distributed-replicated SSD GlusterFS storage was just 4-5 MBps, whereas SSHFS on a SATA HDD averages 30-40 MBps.

By: Marco

Do I have to use a client? If I touch a file on server1, does it get replicated on server2 (and vice-versa)?

By: Mike

Hi Marco,

I have the same question/problem.

Setup: server1, server2, client1 (the instructions work flawlessly, thanks a bunch).

- Syncing from client1 -> server1, server2 ==> working
- Syncing from server2 -> server1, client1 ==> working
- Syncing from server1 -> server2, client1 ==> NOT working

I'm not sure if i got the technical concept wrong (and that this is actually the correct behaviour). If someone can help clarifying this, it will be great.

Thanks again for the great config guide.
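For anyone hitting the same behaviour: a common cause is creating files directly inside the brick directory (e.g. /data) on a server. Changes made there bypass GlusterFS and are not replicated reliably. To create files on a server and have them replicate, mount the volume on that server as well. A minimal sketch, assuming the tutorial's volume name and mount point:

mkdir -p /mnt/glusterfs
mount -t glusterfs localhost:/testvol /mnt/glusterfs    # mount the volume on server1 itself
touch /mnt/glusterfs/created-on-server1.txt             # example file name

Anything written through the GlusterFS mount shows up on the other server and on the clients, while files dropped straight into the brick directory do not.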

 

By: nikola

This doesn't work. I have 2 Gluster servers: gluster1.test.com and gluster2.test.com

My mount command in "/etc/fstab" is "/gluster1.test.com:/testvol /mnt/glusterfs glusterfs defaults,_netdev 0 0"

When I turn off the server "gluster1.test.com", the client shows an error:

$ ls /mnt/glusterfs/
ls: cannot access /mnt/glusterfs/: Transport endpoint is not connected

By: Erik

The default failover timeout is about 42 seconds. For immediate failover, set the ping timeout of the volume to 1 second.

To do this type: gluster volume set {VOLUMENAME} ping-timeout 1
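If the short option name is not accepted, the full name is network.ping-timeout. To verify the change, assuming the tutorial's volume name testvol:

gluster volume set testvol network.ping-timeout 1
gluster volume info testvol    # the new value appears under "Options Reconfigured"

Note that a 1-second timeout can also trigger failovers on short network hiccups, so a slightly higher value may be safer in production.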

By: Fabrizio Salmi

If you encounter the apt-transport-https error, just install the package with:

apt-get install apt-transport-https

and then run apt-get update again.

Hope it helps folks!

 

By: Jose Manuel Ruiz Baena

I had problems with the configuration on Debian 8.8 with IPv6. To solve the problem, I forced the "peer probe" and the volume creation to use IPv4; after that everything works correctly.
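A rough sketch of what that looks like, with placeholder IPv4 addresses instead of hostnames that resolve to IPv6 (the addresses and the /data brick path are just examples):

gluster peer probe 192.168.0.101                                                  # run on server1, probing server2 by IPv4 address
gluster volume create testvol replica 2 192.168.0.100:/data 192.168.0.101:/data force
gluster volume start testvol

The force keyword may be needed if the bricks live on the root filesystem.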

By: Peter

Question...

I'm looking at a multi-PB environment and I think GlusterFS can solve this for me.

Are there any limitations or best practices regarding brick and volume sizes?