#1 - 12th December 2009, 21:26 - TheTank (Junior Member)

High Availability DB-Server

Hello everyone.

I was wondering if someone could help me out with a requirement for a High Availability DB-Server.

I have looked at the databases' built-in replication systems, but they only cover the DB contents. Everything else around them would need extra effort.

I have also thought about using virtualization (we already use VirtualBox, but not for HA). The HA requirement would mean the server may only be alive on one machine in the cluster at a time, and if that machine fails, it should pop up (with as little loss as possible) on another one.

The idea being:
* If a customer only wants one machine: install the image directly on the host.
* If a customer wants an HA system: use a virtualized setup with a floating server (if that will even work).

The thing is:
a) I was deemed 'the expert' (because they could not find anyone else) to do this, but my knowledge is really limited.
b) The HA system somehow needs to be easily installable at a customer's location by a 'service engineer' (some setup steps could be automated, but in general, the easier the better).

A colleague of mine suggested KVM would be just the tool for it, but I have yet to figure out how to get a VM to magically appear on cluster node 2 if the first one fails (I am using Proxmox, based on the howto). Xen was also mentioned to have this feature.

I have looked at the HowTos (excellent work btw!) but am unsure what I really need and what I don't.
Why do I need DRBD, for instance?

thanks in advance!

#2 - 19th March 2010, 10:25 - TheTank (Junior Member)

Just in case anyone is interested, here is an update.

I have looked into using Debian Lenny, Proxmox, KVM, DRBD and Corosync to allow hot-standby of our systems and have a POC running.
This will be a 2-node setup.

Concept:
We create a partition that will be replicated via DRBD.
Into this we persist our KVM VMs.
Heartbeat manages the master/slave setup and starts/stops the VMs via a script (a rough illustration follows below).
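As a very rough illustration of the wiring, a classic Heartbeat v1 haresources line for this could look like the following; the node name, DRBD resource name, mount point and the 'vm-failover' script name are all made-up placeholders, and with Corosync/Pacemaker the same logic would be expressed as cluster resources instead:

Code:
# /etc/ha.d/haresources (Heartbeat v1 style, sketch only)
# On failover: promote DRBD resource r0, mount the replicated
# partition, then run our custom VM start/stop script.
node1 drbddisk::r0 Filesystem::/dev/drbd0::/var/lib/vz-replicated::ext3 vm-failover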

On the master node we:
1. mount the partition
2. copy over the KVM configs (Proxmox expects them in a certain place)
3. elevate this node to master for Proxmox
4. add the mounted partition as a resource to Proxmox
5. start the VMs

On the slave we do exactly the opposite (if it is still running); a sketch of such a script follows below.
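A minimal sketch of what such a start/stop script could look like, assuming the replicated partition is /dev/drbd0 mounted at /var/lib/vz-replicated and that Proxmox keeps its KVM configs in /etc/qemu-server (all placeholders, not our real setup; the Proxmox master election and storage registration of steps 3 and 4 are left out here):

Code:
#!/bin/bash
# vm-failover: called by the cluster manager with "start" or "stop".
# Sketch only -- paths and names are assumptions.
MOUNTPOINT=/var/lib/vz-replicated

case "$1" in
  start)
    # 1. mount the replicated partition
    mount /dev/drbd0 "$MOUNTPOINT"
    # 2. copy the KVM configs to where Proxmox expects them
    cp "$MOUNTPOINT"/qemu-server/*.conf /etc/qemu-server/
    # 3./4. elevate node / register storage in Proxmox (omitted)
    # 5. start every VM whose config we found
    for conf in "$MOUNTPOINT"/qemu-server/*.conf; do
      qm start "$(basename "$conf" .conf)"
    done
    ;;
  stop)
    # exactly the opposite, in reverse order
    for conf in "$MOUNTPOINT"/qemu-server/*.conf; do
      qm stop "$(basename "$conf" .conf)"
    done
    umount "$MOUNTPOINT"
    ;;
esac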

Proxmox:
A nice environment with an installer CD and web UI. It offers many features out of the box, such as migration of VMs, clustering of servers and so on.

Problem is, in our case we cannot use the CD, so I had to resort to installing it manually and then building up the system step by step.

Problem 2: Proxmox does not directly support what we intend to do.

Really rough overview of the steps:
1. Install & configure Debian Lenny amd64
2. Install and configure psmisc, samba, ntp and ssh
3. Configure the network & hosts file using static IPs
4. Update the repository list for DRBD, Corosync & PVE
5. Install the PVE kernel (pve-kernel-2.6.32) and configure your boot menu
6. Install bridge-utils and bridge the network interfaces
7. Install drbd8-modules-[version]-amd64 and drbd8-utils (in my case version = 2.6)
WARNING: the Proxmox repo contains a drbd8-utils that is incompatible with the modules, so make sure your versions match!
For me it was 2:8.0.14-2
8. Configure & set up DRBD (a minimal config sketch follows after this list)
9. Install and configure Corosync with our script mentioned above
10. Install and set up proxmox-ve-[pve kernel version]
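To illustrate step 8, a minimal two-node resource configuration in DRBD 8.0 syntax might look roughly like this; the host names, backing partitions and IP addresses are placeholders, not our actual values:

Code:
# /etc/drbd.conf -- minimal two-node resource (DRBD 8.0 syntax)
resource r0 {
  protocol C;                  # fully synchronous replication
  on node1 {
    device    /dev/drbd0;
    disk      /dev/sda3;       # partition set aside for replication
    address   192.168.0.1:7788;
    meta-disk internal;
  }
  on node2 {
    device    /dev/drbd0;
    disk      /dev/sda3;
    address   192.168.0.2:7788;
    meta-disk internal;
  }
}

Regarding the version warning in step 7: after loading the module, 'cat /proc/drbd' prints the module version, which has to match the installed drbd8-utils package.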

If we install a system that is supposed to be virtualized, we must make sure to install the VM on the replicated resource.
My script simply goes through the list of VMs in that folder, updates the configs and then starts/stops them.

Open issues:
* For each new VM we add, we manually have to copy its config to the replicated drive (from /etc/qemu-server); maybe a cron job can handle this (see the sketch below).
* Apache sometimes does not want to start after rebooting. An apache2 restart lets me access the web GUI again.
* As I mentioned, Proxmox does not really support what we want, though it has been indicated that this might be something for 2.0.
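For the first open issue, a cron job along these lines could keep the configs in sync (one-way, from the active node onto the replicated drive; the mount point is again an assumed placeholder):

Code:
# /etc/cron.d/sync-vm-configs -- sketch, runs every 5 minutes
# Copies new/changed VM configs onto the replicated partition.
*/5 * * * * root rsync -a /etc/qemu-server/ /var/lib/vz-replicated/qemu-server/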

If there is a desire for an (incomplete) tutorial, please just say so and I'll try to get clearance from my boss.

Currently I have been redirected to something else (similar, but in this case I only replicate PostgreSQL via DRBD & Corosync... egad, anyone have a clue?), so updates might take a while.

#3 - 24th March 2010, 17:21 - TheTank (Junior Member)

@Mods:
Can someone change the title of this topic? IMHO it has evolved beyond the typical 'please help me' thread into something a little more informative.

Maybe just remove the 'Advice pls:' prefix.

thanks