glusterfs for www and mail spool

Discussion in 'Developers' Forum' started by ispcomm, Nov 22, 2013.

  1. ispcomm

    ispcomm Member

I am sorry for posting this in the developers section, but I feel the "General" and "Install/Config" sections aren't the right fit for this. It's probably not the right forum anyway, as it's not directly related to ISPConfig, but I'll give it a shot.

    I am wondering if anybody is running a GlusterFS-backed web pool or maildir pool.

    What performance do you get compared to, let's say, plain NFS?

    I've been looking at gluster over the years, but have never used it for more than my backups (which are big gzipped tar files).

    Now I'm considering the idea of ditching some of my SAN stuff to build a linearly scalable back-end storage for hosting the web sites and the Dovecot maildirs.

    I tried gluster in the past but the performance was terrible (that was before the gluster v3 series).

    Is it any better now?
  2. till

    till Super Moderator Staff Member ISPConfig Developer

    GlusterFS gets very slow when you have many small files, as is typical in /var/www. I've used it for cluster setups, but it does not work well there.

    What might work is creating e.g. a big loop device file on a GlusterFS volume and then mounting it as /var/www. But I haven't tested that yet.
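A rough sketch of that untested idea, assuming a GlusterFS volume is already FUSE-mounted somewhere (the paths and the 50G size here are illustrative, not from this thread):

```shell
# Loop-device trick: Gluster then replicates one big file instead of
# thousands of small ones. All names/sizes below are made up.
GLUSTER_MNT=/tmp/gluster-demo   # stand-in for your real glusterfs mount point
IMG="$GLUSTER_MNT/www.img"

mkdir -p "$GLUSTER_MNT"
# Sparse image file; blocks get allocated (and replicated) as they are written.
truncate -s 50G "$IMG"

# From here on root is needed (shown commented out):
# mkfs.ext4 -F "$IMG"
# mount -o loop,noatime "$IMG" /var/www
```

The obvious trade-off: only one node can loop-mount the image read-write at a time, so this only fits an active/passive setup, not multiple concurrent web servers.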
  3. florian030

    florian030 ISPConfig Developer ISPConfig Developer

    You can try DRBD with the OCFS2 filesystem. For me it works much better than GlusterFS, and the performance is OK. But you might see I/O load peaks when backing up data on a volume from both servers at the same time.
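For reference, OCFS2 on top of DRBD needs the resource in dual-primary mode so both nodes can mount it at once. A minimal DRBD 8.x resource sketch (hostnames, devices and addresses are placeholders, not from this thread):

```
resource r0 {
  protocol C;              # synchronous replication, required for dual-primary
  net {
    allow-two-primaries;   # lets both nodes go primary, needed for OCFS2
  }
  startup {
    become-primary-on both;
  }
  device    /dev/drbd0;
  disk      /dev/sdb1;     # placeholder backing device
  meta-disk internal;
  on node-a { address 10.0.0.1:7789; }
  on node-b { address 10.0.0.2:7789; }
}
```

After that you would run mkfs.ocfs2 on /dev/drbd0 once (with an OCFS2 cluster configured on both nodes) and mount it on both servers.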
  4. ispcomm

    ispcomm Member

    florian: thanks for suggesting this. However, it won't work in my case. DRBD is OK for two-node setups, but I'm looking at replacing a NAS setup based on NFS that serves multiple clients (multiple load-balanced web servers and multiple mail toaster servers).

    I'm at the point where NFS becomes the bottleneck, and I've already split the web roots from the mail spool dirs. The next big thing would be to scale the shared storage linearly, or to forget everything and start doing what everybody does: local storage + rsync replication.

    Till: did you use gluster with the FUSE mount, or did you use the native NFS server? The FUSE mount, which was mandatory in v2, is what disappointed me.
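For anyone comparing the two, the mount flavours look like this in /etc/fstab form (server and volume names are placeholders; note that Gluster's built-in NFS server speaks NFSv3 only):

```
# FUSE client: the client talks to all bricks and does the replication itself
server1:/webvol  /var/www  glusterfs  defaults,_netdev        0 0

# Built-in gluster NFS server: plain NFSv3 client, kernel-side caching
server1:/webvol  /var/www  nfs        vers=3,nolock,_netdev   0 0
```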
  5. ispcomm

    ispcomm Member

    Here's a bonnie++ benchmark (I use bonnie a lot) from a FUSE-mounted GlusterFS 3.4 volume. The setup is actually very low-end: two servers connected over a 100 Mbit network.

    Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
    Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
    Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
    S1             512M           10661  11  9695  12           49619  13 523.8  34
    Latency                         445ms     870ms             75088us    2425ms
    Version  1.96       ------Sequential Create------ --------Random Create--------
    S1                  -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
                  files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                     16    30   0  1710   6   198   2    38   0   443   2   140   2
    Latency              3208ms   10994us     601ms    3241ms   20206us    2085ms
    I'm using the seeks/sec as a proxy for IOPS. The random file create rate is another proxy for IOPS. I must do this test with an NFS mount next. Stay tuned :)

    Update: sorry... but the NFS bonnie test failed miserably at the sequential write test. It locked up the NFS-mounted GlusterFS and could not be killed (kill -9 had no effect). I had to reboot the server, and that was enough for me, so I'm done for the day.
    Last edited: Nov 23, 2013
  6. till

    till Super Moderator Staff Member ISPConfig Developer

    I tried only fuse mounts.
