Old 22nd February 2007, 21:28
Jan VdV
Junior Member
Join Date: Feb 2007
Location: Leuven, Belgium
Posts: 6
Installed NFS cluster, now only 1 connection out of 2 works

Hello again,

I'm struggling with my MySQL cluster, which seems to have some hiccups.

Let me explain:

I have two load balancers (master and backup) load balancing two webservers. The load balancers also load balance two data servers running a MySQL cluster; these two data servers also run an NFS server.

I've set up one network for the internal traffic; to the outside I connect to our university lab's network. I also use a separate network to connect the two load balancers to each other (which is rather useless, but it seemed a good idea at the time).

Three virtual IP addresses have been configured on the load balancers, one for each network.

Via masquerading I connected the webservers, so that the 172.16.9.x virtual IP passes requests on port 80 to them.
I then set up the NFS cluster. Between the two data servers I used heartbeat (according to Falko Timme's tutorial), and that also went fine. The NFS service listens on a virtual IP set up on the two data servers.
All servers (so the 2 webservers and the 2 data servers) use the 192.168.0.x virtual IP address as default gateway, so the load balancers also function as a router. This all works fine.
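For completeness, the port-80 forwarding described above is defined by an ldirectord virtual-service block along these lines (a sketch only: all addresses below are placeholders, not my actual file, which is excerpted further down):

```
# ldirectord.cf sketch of the HTTP virtual service
# (all addresses are placeholders)
virtual = 172.16.9.x:80              # external virtual IP (placeholder)
        real = 192.168.0.x:80 masq   # webserver 1 (placeholder)
        real = 192.168.0.x:80 masq   # webserver 2 (placeholder)
        service = http
        request = "ldirector.html"
        receive = "Test Page"
        scheduler = wrr
```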

Finally I've set up the SQL server: the two SQL databases run on the data servers, and the management server is installed on the first load balancer (again according to one of Falko Timme's fine tutorials). When I tried to connect to the MySQL cluster via the 192.168.0.x virtual IP, I couldn't get it to work. I presume the masquerading prevents a good connection.
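For reference, the cluster layout follows the usual config.ini pattern from the tutorial; a sketch with placeholder addresses (not my real file):

```
# /var/lib/mysql-cluster/config.ini on the management server
# (sketch only -- the HostName values are placeholders)
[NDBD DEFAULT]
NoOfReplicas=2

[NDB_MGMD]
HostName=192.168.0.x    # load balancer 1 (placeholder)

[NDBD]
HostName=192.168.0.x    # data server 1 (placeholder)

[NDBD]
HostName=192.168.0.x    # data server 2 (placeholder)

[MYSQLD]
[MYSQLD]
```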
So I've set up the cluster to listen on the virtual 192.168.1.x address. I presumed I'd have to use masquerading too, since the virtual IP address is on another network than the actual MySQL servers, but when nmapping this virtual address I saw that port 3306 was filtered.
I then changed the config of the real servers in /etc/ha.d/ from masq to gate and tried again. To my surprise, nmap now reported 'open' instead of 'filtered'.
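From what I understand of direct routing (gate), each real server must also accept packets addressed to the virtual IP itself, typically via a non-ARPing loopback alias. Something like this on each data server (a sketch I haven't verified against my own setup):

```
# /etc/network/interfaces addition on each data server (sketch)
# Bind the MySQL virtual IP to the loopback so the node accepts
# packets routed to it under gate/direct routing.
auto lo:0
iface lo:0 inet static
        address 192.168.1.x     # the MySQL virtual IP (placeholder)
        netmask 255.255.255.255

# /etc/sysctl.conf additions so the data servers don't answer ARP
# for the virtual IP (2.6 kernels):
# net.ipv4.conf.all.arp_ignore = 1
# net.ipv4.conf.all.arp_announce = 2
```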
So I thought all problems were solved. But when making a MySQL connection using [CODE]mysql -u user -h 192.168.1.x -p[/CODE], I can only connect every second time (connecting from one of the webservers via mysql-client-4.1).
When I disconnect one of the two nodes, the problem disappears, no matter which data server it is.
So installing a load balanced MySQL cluster actually decreased availability by 50%?
I can also connect directly to each of the nodes without problems.
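To quantify the "every second time" behaviour, I count successes over repeated attempts with a small POSIX-shell helper (a generic sketch; shown with a stub command so it runs anywhere, but in practice I point it at a one-line script that runs the mysql client once against the virtual IP):

```shell
#!/bin/sh
# count_successes CMD N -- run CMD N times, print how many runs succeeded.
count_successes() {
    cmd=$1
    n=$2
    ok=0
    i=0
    while [ "$i" -lt "$n" ]; do
        if $cmd >/dev/null 2>&1; then
            ok=$((ok + 1))
        fi
        i=$((i + 1))
    done
    echo "$ok"
}

# Demo with a stub that always succeeds; against the cluster VIP I'd
# expect roughly half the attempts to succeed if every second one fails.
count_successes true 10   # prints 10
```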

Can anybody help me out? (P.S.: my apologies for the elaborate explanation.)

Here are some configuration files.

ldirectord.cf (excerpt):
[CODE]
        real = masq
        real = masq
        receive = "Test Page"
virtual =
        service = mysql
        real = gate
        real = gate
        checktype = negotiate
        login = "ldirector"
        passwd = "ldirectorpassword"
        database = "ldirectordb"
        request = "SELECT * FROM connectioncheck"
        scheduler = wrr
[/CODE]
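The negotiate check above only works if the ldirector account and the connectioncheck table actually exist on both SQL nodes; roughly this setup (a sketch following the tutorial's naming, MySQL 4.1 syntax, adjust privileges as needed):

```
-- Run once against the cluster (sketch)
CREATE DATABASE ldirectordb;
USE ldirectordb;
CREATE TABLE connectioncheck (i INT) ENGINE=NDBCLUSTER;
INSERT INTO connectioncheck VALUES (1);
GRANT ALL ON ldirectordb.* TO 'ldirector'@'%' IDENTIFIED BY 'ldirectorpassword';
```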
/etc/network/interfaces from load balancer 1:
[CODE]
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto eth0
iface eth0 inet static

auto eth1
iface eth1 inet static

auto eth2
iface eth2 inet static
up iptables -t nat -A POSTROUTING -j MASQUERADE -s
down iptables -t nat -D POSTROUTING -j MASQUERADE -s
[/CODE]