Be better fault tolerant with another host

Discussion in 'Installation/Configuration' started by Yann, Feb 16, 2016.

  1. Yann

    Yann New Member

    Hello,
    Hello,
    We currently have the following configuration and would like to be "better fault tolerant":
    • VMs (each service runs in a separate VM): 2 NS (mirrored), 1 mail server, 1 web server, 1 MySQL
    • The ISPConfig master is stored on the web server
    Should we build another Proxmox host? We can redirect the VMs' IPs to another host without much trouble. Bandwidth is currently limited to 500 Mbit/s between hosts; there is no limitation on the VMs. We don't mind if the websites are down for 5-10 minutes :)

    Thank you very much for your useful advice!
     
  2. ztk.me

    ztk.me Active Member

    To be really fault tolerant you'd need systems at different locations; you could use something like cloudflare.com to redirect traffic (if routing traffic through a third-party US service isn't a concern ^^).

    Well, where to start? There are so many things one could do =) Some ideas:
    Set up a second host with MySQL replication and sync your web and mail data. You could also use a small nginx server for load balancing (beware of sessions: you'd probably want something like Redis for PHP sessions, or check the Apache documentation for sticky sessions). Keep in mind that the MySQL connection data might be "localhost" or similar; either use fixed hostnames and change the IP in the hosts file, or do the corresponding routing with iptables.
    Do some testing for the cases when systems fail: reconfiguring MySQL routing and the master/slave roles.

    Also, you could configure the cold standby as a backup MX ...

    Or just leave the systems as a copy and simply reroute traffic.

    Short answer: sure, if you can reroute traffic, the more hosts you have the better :) Also beware of cronjobs; don't let them run twice. If a site has a daily cronjob, you probably don't want it executed on the cold backup server as well.
    Don't forget to resync back from the failover system afterwards; there could be file writes going on on the web servers, or just awstats files and whatnot...

    And I surely haven't covered everything :)
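    As a sketch of the nginx load-balancing idea above (all IPs, names, and the ip_hash choice are illustrative placeholders, not taken from the thread):

```nginx
# Hypothetical /etc/nginx/conf.d/lb.conf - IPs and server_name are placeholders.
upstream web_backend {
    ip_hash;                       # crude stickiness: same client IP -> same backend
    server 192.0.2.10:80;          # primary web VM
    server 192.0.2.20:80 backup;   # only used when the primary is down
}

server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://web_backend;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

    With `ip_hash` the session-stickiness problem is sidestepped at the cost of uneven balancing; shared Redis sessions would balance better.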
     
  3. florian030

    florian030 ISPConfig Developer ISPConfig Developer

    You can use backup server(s) or run an active/active setup. In both cases you must replicate the data between the servers. You can use MySQL master-master for the databases, Dovecot with dsync for mail, and unison for the web files.
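    As a rough sketch, the dsync and unison replication could be driven by commands like these (hostnames, users, and paths are placeholders, not from the thread):

```sh
# Hypothetical sync commands run on the primary, e.g. from cron.

# Replicate one mailbox to the standby with Dovecot dsync over SSH:
doveadm sync -u user@example.com ssh vmail@backup.example.com \
    doveadm dsync-server -u user@example.com

# Two-way sync of the web files with unison (-batch = no interactive prompts):
unison -batch /var/www ssh://root@backup.example.com//var/www
```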
     
  4. ztk.me

    ztk.me Active Member

    Florian, is master-master safe to use? I did some testing some years ago... at first glance it worked quite stably, but after a while in production it went out of sync. I don't remember now whether it was a mistake I or my customer made, or something else (it's really been a while), but what stuck with me is not to use it unless you have time for real monitoring and enough spare time to fix the issues that come with it.
     
  5. florian030

    florian030 ISPConfig Developer ISPConfig Developer

    This depends on your setup. I run several master-master replications without any problems. If you let ISPConfig manage new databases / new db users, just exclude mysql.* from the replication and configure some slave skip errors.
    On a fresh install I would monitor the SQL replication to investigate in case of any failure. Usually you can get the replication working again quite fast.
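    A sketch of what those exclusions might look like in my.cnf (server IDs and the skipped error code are illustrative, not a recommendation):

```ini
# Hypothetical my.cnf fragment for one of the two masters.
[mysqld]
server-id           = 1          # must differ on the other master
log-bin             = mysql-bin
binlog-format       = MIXED

# Keep ISPConfig-managed users/grants out of the replication stream:
replicate-ignore-db = mysql
binlog-ignore-db    = mysql

# Skip duplicate-key errors (1062) instead of halting the slave thread;
# use with care, since this can hide real consistency problems.
slave-skip-errors   = 1062
```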
     
  6. ztk.me

    ztk.me Active Member

    Yeah, I excluded mysql from the replication; at the time I didn't use any software to manage the system, since none was needed. And of course I monitored the replication, but it still failed, which was disappointing. That's why I switched back to master-slave and don't query the slaves for time-critical insert/select operations. Most of the time master-master works OK, but I guess if you really rely on it for resource reasons, it's not that simple to achieve reliably.
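    Monitoring usually comes down to watching `SHOW SLAVE STATUS\G`. A small hedged sketch of a check one could script around that output (a parser over captured client output; field names are the classic MySQL replication ones):

```python
# Hypothetical monitoring sketch: parse captured `SHOW SLAVE STATUS\G` output
# and flag the common failure states (stopped threads, unbounded lag).

def parse_slave_status(raw: str) -> dict:
    """Turn the \\G-style "Key: value" output into a dict."""
    status = {}
    for line in raw.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            status[key.strip()] = value.strip()
    return status

def replication_ok(status: dict, max_lag: int = 60) -> bool:
    """True only if both replication threads run and lag stays under max_lag."""
    if status.get("Slave_IO_Running") != "Yes":
        return False
    if status.get("Slave_SQL_Running") != "Yes":
        return False
    lag = status.get("Seconds_Behind_Master", "NULL")
    return lag != "NULL" and int(lag) <= max_lag

# A broken-replication sample, as the mysql client would print it:
sample = """\
             Slave_IO_Running: Yes
            Slave_SQL_Running: No
        Seconds_Behind_Master: NULL
               Last_SQL_Error: Duplicate entry '42' for key 'PRIMARY'
"""
print(replication_ok(parse_slave_status(sample)))  # False: SQL thread stopped
```

    In practice one would feed this from `mysql -e 'SHOW SLAVE STATUS\G'` in a cron job and alert when the check returns False.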
     
  7. till

    till Super Moderator Staff Member ISPConfig Developer

    What you need is some kind of MySQL replication, as Florian pointed out. MySQL master-master is indeed not that stable; I had problems with it too in the past. I've heard that a Percona cluster is supposed to be more stable, but I haven't had the time to test that yet.
     
  8. florian030

    florian030 ISPConfig Developer ISPConfig Developer

    Huh? I'm running several master-master replications with no problems, over several years. ;)
     
  9. ztk.me

    ztk.me Active Member

    How much write/read traffic do you have? It worked very well, with super fast replication, in my test environment.
    But after deploying it to a customer's live system it failed after a short while. There were many write/read requests depending on each other.
    However, that was an early version of MySQL supporting master-master. I did some testing with other MySQL forks but never went "live" with them; either there were some minor incompatibilities with the customer's code, or (especially with MariaDB) lots of monitoring plugins like Munin broke and I had to patch a lot of files.
    One customer allowed me to switch to MariaDB (5.5) live, but since there was a noticeable performance drop, MySQL went back into production :)

    But since all of that, several years have passed - maybe it's time to do some new testing, things change :) But before that I'll switch from Apache mpm-worker to mpm-event :D
     
  10. florian030

    florian030 ISPConfig Developer ISPConfig Developer

    I never measured db reads/writes. I have a customer with a ~50 GB database and ~2,500,000 HTTP connections per day. Each HTTP connection triggers a MySQL read or write, and this doesn't affect the SQL replication.
     
  11. ztk.me

    ztk.me Active Member

    Well, it definitely did when I tested it - after a while, with a ~20 GB database and about 8-10k hits/hour ;) However, out-of-sync auto_increment fields are not easy to recover from; you can't just let the other master catch up. Which MySQL version are you using, and which binlog format?
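    The standard way to avoid those auto_increment collisions between two masters is to interleave the key ranges; a sketch (values are illustrative for a two-master setup):

```ini
# Master 1 generates keys 1, 3, 5, ...; master 2 (with offset = 2) generates 2, 4, 6, ...
[mysqld]
auto_increment_increment = 2   # step size = number of masters
auto_increment_offset    = 1   # set to 2 on the second master
```

    This prevents new collisions but won't repair rows that already diverged.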
     
  12. florian030

    florian030 ISPConfig Developer ISPConfig Developer

    mixed binlog
     
