Migrating ISPConfig to new infrastructure - Advice needed

Discussion in 'ISPConfig 3 Priority Support' started by stefanm, May 9, 2019.

  1. stefanm

    stefanm Member HowtoForge Supporter

    Hi all,
    sorry if this is the wrong forum for such a question (Till, Falko, please move the thread if it doesn't fit here). I am planning to move my current ISPConfig server setup to new servers and, while doing that, change the overall structure.
    What I have now:
    Two dedicated servers. The first one is the actual server running ISPConfig, webs, databases and the email server. The second one is an ISPConfig slave server acting as a cold standby in case the first server fails. The databases are synced via MariaDB replication, mails are synced with dovecot dsync, and web folders are synced with a custom script running rsync. The switch between the servers is done via a failover IP.
    This actually works pretty well, but all the syncing causes a lot of overhead and multiple points of failure, accompanied by an ever-present uneasy feeling that something in the syncing didn't work as expected but went unnoticed.
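    As a side note on the "went unnoticed" worry: one cheap safeguard is a periodic consistency check that compares content checksums on both sides instead of trusting the sync job's exit code. A minimal sketch, with the caveat that the temp directories here only stand in for the real web roots on both servers (the real paths and remote fetch are assumptions, not part of the thread):

```shell
#!/bin/sh
# Sketch: detect silent sync drift by hashing file contents plus relative
# paths on both sides. In a real setup the second tree would come from the
# standby host; local temp dirs are used only to make the sketch runnable.
set -e

tree_sum() {
    # One order-independent checksum for a whole directory tree
    (cd "$1" && find . -type f -exec sha256sum {} + | sort) | sha256sum | cut -d' ' -f1
}

SRC=$(mktemp -d)   # stands in for the web root on the primary
DST=$(mktemp -d)   # stands in for the web root on the standby

echo "index" > "$SRC/a.html"
cp -a "$SRC/a.html" "$DST/a.html"   # simulate a successful sync

if [ "$(tree_sum "$SRC")" = "$(tree_sum "$DST")" ]; then
    echo "in sync"
else
    echo "DRIFT detected"
fi
```

    Run from cron and mail the output only on mismatch; it catches silent sync failures without touching the sync mechanism itself.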
    In some other thread either Till or Falko (sorry, cannot remember) suggested using a virtualized environment instead... which totally makes sense, so I am now racking my brain about the proper setup and I could really use some advice from people already using a virtualized multi-server setup about best practices and common pitfalls.

    The overall idea is:
    Two new servers which are both active.
    First server hosts the ispconfig panel in a separate VM and the mail server in a separate VM.
    Second server hosts two different web server VMs. One main web server and one separate VM for running some websites which need special care in terms of maintenance. All these VMs are ispconfig slaves.
    Additionally I would like to push snapshots of the VMs between the two servers, so I can quickly power up a VM on the other server in case one server fails. I am aware that without shared storage there will always be a certain time gap that might lead to data loss in case something really fatal happens on one of the servers. All the sites are low-traffic with very infrequent changes.

    My first idea was setting up Proxmox on both machines with ZFS storage and replicating between the servers via pve-zsync, but there seem to be some caveats, so I wonder if there are better solutions:
    * ZFS seems to require quite a lot of memory, so I am sceptical about the performance penalty it might impose
    * There seem to be problems with booting a ZFS mirror from NVMe SSDs
    * It seems one has to be very careful about configuring ZFS for SSDs, otherwise the SSDs might wear out quickly
    (this is what I got from reading numerous pages about ZFS, so it might be inaccurate or outdated).
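    For reference, the pve-zsync workflow mentioned above boils down to one job per VM. A hedged sketch of the two relevant invocations; the VM ID, target IP and pool name below are made-up examples, not values from this thread:

```shell
# One-off push of VM 100's disks to the other node's ZFS pool
# (all IDs, addresses and pool names are illustrative examples):
pve-zsync sync --source 100 --dest 192.168.1.2:rpool/backup --verbose

# Or create a recurring job; pve-zsync writes a cron entry under /etc/cron.d
# and --maxsnap keeps the last 7 replicated snapshots on the target:
pve-zsync create --source 100 --dest 192.168.1.2:rpool/backup \
    --name web1job --maxsnap 7 --verbose
```

    These are configuration fragments for a Proxmox node and obviously can only run there.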

    Till / Falko suggested a much simpler setup with only a minimal system and the bare virtualizer, so you could simply reinstall a whole machine within a short time and recover the VMs (which also makes total sense). But which technologies would be sufficient to take snapshots at regular intervals and push these between the servers?

    And which virtualization technology is preferable, KVM or LXC? I think LXC would be faster, and I think you could even use something like rsync to push the VM to the other server? Network setup, on the other hand, might be easier with KVM, I guess, while KVM is slower and, at least when working with disk image files, needs another technology for syncing? And what about scaling?
    I must confess I am not sure how the virtualization is implemented. When using KVM, each VM running Apache has its own separate Apache process. But how is this done in containers, if the host is also running Apache? Are these still separate processes, or is this somehow done via threading or the like, such that the container processes are attached to the Apache process of the host? If that's the case, I think the sheer number of threads might limit the scaling possibilities? (but of course there might be no problem at all and I am just thinking about non-existent problems)
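    On the rsync idea: for an LXC container with a plain directory-backed rootfs this can work as a cold copy, i.e. while the container is stopped. (And on the process question: container processes are ordinary separate processes on the shared host kernel, isolated via namespaces and cgroups; each container's Apache is its own process with no threading relationship to the host's Apache.) A hedged sketch for a failover script; the container name and target host are invented for illustration:

```shell
# Cold-copy a directory-backed LXC container to the second server.
# -A/-X carry ACLs and extended attributes, --numeric-ids avoids UID/GID
# remapping on the target (matters for containers with shifted IDs).
lxc-stop -n web1
rsync -aHAX --numeric-ids --delete \
    /var/lib/lxc/web1/ root@server2:/var/lib/lxc/web1/
lxc-start -n web1
```

    This only gives a crash-consistent-at-best copy while running; stopping the container first, as shown, is what makes it safe.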

    Does anybody here use such a virtualized multi-server environment and could give some advice?
    Thanks for your help and taking the time to read all of the above!

    P.S. Does the Migration tool allow migrating selectively? Last time I used the tool I just migrated from old hardware to new hardware but kept the overall structure. From several posts in the forums I would conclude that the migration tool is capable of selective migration, but I am not sure...
  2. Taleman

    Taleman Well-Known Member HowtoForge Supporter

    The Migration tool cannot migrate part of the setup. But new and old need not have the same number of servers in the cluster; you can migrate a single server setup to a multiserver setup or vice versa.
  3. muekno

    muekno Member HowtoForge Supporter

    Wouldn't the copy tool do that?
    Until now I have used the copy tool to migrate complete servers, e.g. from Debian 8 to Debian 9, but from what I remember of the readme, migrating parts should be possible.
  4. till

    till Super Moderator Staff Member ISPConfig Developer

    The current versions of the Migration Tool (Migration Toolkit 2.x) are able to migrate a system selectively, so you can migrate just a specific website, database or email domain, or a specific client with all his assets. When the target is a multiserver system, you can choose on which node the migrated data shall end up.

    The only thing that the Migration Tool can't do is move a website within the same multiserver setup. E.g. you have an existing multiserver setup where web servers web1 and web2 are attached to the same master; if you want to move a website from web1 to web2, then this is not possible with the tool yet, as both servers web1 and web2 are connected to the same master.

    Regarding ZFS: I don't have any experience with that filesystem, but what I know is that you can't have quota in websites in ISPConfig if you use it, as ZFS quota is different from Linux filesystem quota and ISPConfig supports Linux filesystem quota only.

    In regard to virtualization technologies: I would prefer LXC too as it is more lightweight, but you should be aware that quota in websites will probably not work in LXC. So if you need website quotas, better use KVM.
  5. stefanm

    stefanm Member HowtoForge Supporter

    Many thanks for your insights. This is great news about the Migration tool.
    I guess I can go with LXC; I do use quotas at the moment, but they are not a breaking feature.
    So, the biggest question still is how to replicate snapshots of the virtual servers to a second machine, i.e. what filesystem to use and which replication tool. The most critical part of this is probably taking a safe snapshot of virtual servers that run database servers.
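    One common approach to the database problem is not to rely on the filesystem snapshot alone: a pre-snapshot hook writes a transactionally consistent dump inside the guest, so the snapshot always contains at least one clean copy even if the InnoDB files are caught mid-write. A hedged sketch; the paths and the container name are invented examples, not a tested recipe:

```shell
# Inside the guest, just before the snapshot: --single-transaction gives a
# consistent dump for InnoDB tables without locking writes (MyISAM tables
# would still need a lock); --quick streams rows instead of buffering them.
mysqldump --single-transaction --quick --all-databases \
    | gzip > /var/backups/pre-snapshot.sql.gz

# Then on the host (LXC container on a snapshot-capable storage backend
# such as LVM, Btrfs or ZFS; "db1" is an example name):
lxc-snapshot -n db1
```

    The snapshot itself stays crash-consistent, but the dump inside it is guaranteed restorable, which is usually good enough for low-traffic sites.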
    Hmm, anyone using a replication setup based on snapshots?
  6. muekno

    muekno Member HowtoForge Supporter

    Maybe I am wrong, but as I understand it you are looking for redundancy. So let us think a little unconventionally, and let me tell you what I have been doing for more than 20 years. First, I use used professional servers, because they are cheap, and also used, tested hard disks. There is a lot of cheap hardware on the market that has run only a short time (long before its real end of life), some out of support, or replaced because a bigger machine was needed, or coming off lease, etc. But this hardware runs fine for many more years.
    For virtualization I use VMware ESXi; it is free for one server, missing only some features I do not need. VMware is reliable and easy to handle, and ESXi virtual machines are compatible with VMware Workstation virtual machines, for testing and so on. I use mirroring, not RAID; it uses a little more hard disk space but gives a little more performance. And I have identical spare hardware without disks, plus some spare disks. So if a hard disk fails, the system keeps running and I can hot-swap in a new hard disk without any downtime. I could even configure spare disks to take over automatically until I replace the failed one. If the server itself breaks (I never had that in more than 30 years), I can quickly replace the broken parts from the spare server, or, the quicker way, move the hard disks from the broken server to the spare server and everything is up and running (remember: identical hardware, same controllers, especially the hard disk controller, same BIOS version and so on). I have tested it but never needed it.
    On the other side, virtual machines are quickly moved to new systems. So from time to time (every 3 to 5 years), when my hardware gets on in years and newer hardware becomes cheap, I set up a new server, move the virtual machines (downtime is just the copy time) and everything is up again (I did it just 2 months ago without any problem). For updating to a new OS I used to do a new setup and move the data, but now I use the copy tool, which is much quicker, with less downtime, and easier. And I make backups before risky things.
    And I use a virtual multiserver ISPConfig system, so working on one virtual server does not affect every service.
    Conclusion: I need a reliable system with mostly no downtime, but planned downtime on a weekend or at night for some hours is acceptable. Unplanned downtime (I haven't had any till now) is acceptable if less than 2 to 3 hours. I do not need the newest and most performant hardware. With my strategy I have done fine for many, many years and saved a lot of money on hardware. Sure, that would be no strategy for a highly professional site, a highly frequented webshop or a big company, but for a lot of private, semi-professional and small business sites it is OK, gives high reliability and saves a lot of money.

    Regarding ZFS: it is an ingenious file system, comparable with Novell's NSS or Btrfs (same philosophy), but do you really need it? I use ext4 now, but all of the above would work with ZFS too.

Share This Page