Installed nfs cluster, now only 1 connection out of 2 works
I'm struggling with my MySQL cluster, which seems to have some hiccups.
Let me try to explain:
I have two load balancers (master and backup) load balancing two webservers. The load balancers also load balance two data servers running a MySQL cluster. These two data servers also run an NFS server.
I've used the 192.168.0.0/24 network for the internal network; to the outside I connect to the 172.16.9.0/24 network of our university's lab. I also use a 192.168.1.0/24 network to connect the two load balancers with each other (which is rather useless, but it seemed a good idea at the time).
Three virtual IP addresses have been configured on the load balancers, one for each network.
Via masquerading I connected the webservers, so that the 172.16.9.x virtual IP passes requests on port 80 to the webservers.
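For reference, the HTTP virtual service section in /etc/ha.d/ldirectord.cf looks roughly like this (a sketch only; the addresses and check page here are placeholders, not my actual config):

[CODE]
# /etc/ha.d/ldirectord.cf -- HTTP virtual service (sketch, addresses are examples)
virtual=172.16.9.10:80
        real=192.168.0.101:80 masq
        real=192.168.0.102:80 masq
        service=http
        request="ldirector.html"
        receive="Test Page"
        scheduler=rr
        protocol=tcp
        checktype=negotiate
[/CODE]

The masq forwarding method makes the load balancer NAT the requests to the real servers on the internal network.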
I then set up the NFS cluster. Between the two data servers I used heartbeat (according to Falko Timme's tutorial) and that also went fine. The NFS server listens on 192.168.0.20 (a virtual IP set up on the two data servers).
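In heartbeat terms, the two data servers share that virtual IP via /etc/ha.d/haresources, roughly like this (a sketch under assumptions: 'data1' is an example hostname and eth0 an example interface, not necessarily my real values):

[CODE]
# /etc/ha.d/haresources, identical on both data servers (sketch)
# primary-node  virtual-IP/netmask/interface  resource-script
data1 IPaddr::192.168.0.20/24/eth0 nfs-kernel-server
[/CODE]

Heartbeat brings the virtual IP and the NFS init script up on whichever node is active and fails them over together.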
All servers (the two webservers and the two data servers) use the 192.168.0.x virtual IP address as their default gateway, so the load balancers also function as a router. This all works fine.
Finally I've set up the SQL server: the two MySQL databases run on the data servers, and the management server is installed on the first load balancer (again according to one of Falko Timme's fine tutorials). When I tried to connect to the MySQL cluster via the 192.168.0.x virtual IP, I couldn't get it to work. I presume the masquerading prevents a good connection.
So I've set up the cluster to listen on the virtual 192.168.1.x address. I presumed I'd have to use masquerading here too, since the virtual IP address is on a different network than the actual MySQL servers, but when nmapping this virtual address I saw that port 3306 was filtered.
I then changed the config of the real servers in /etc/ha.d/ldirectord.cf from masq to gate and tried again. To my surprise, nmap now reported the port as 'open' instead of 'filtered'.
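The changed section looks roughly like this (again a sketch with placeholder addresses and credentials). Note that gate is LVS direct routing: the load balancer only rewrites the destination MAC address, so each real server must also have the virtual IP configured locally (typically on a loopback alias with ARP replies suppressed), or it will drop packets addressed to it:

[CODE]
# /etc/ha.d/ldirectord.cf -- MySQL virtual service (sketch, addresses are examples)
virtual=192.168.1.20:3306
        real=192.168.0.111:3306 gate
        real=192.168.0.112:3306 gate
        service=mysql
        login="ldirector"
        passwd="secret"
        database="test"
        scheduler=rr
        protocol=tcp
        checktype=negotiate
[/CODE]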
So I thought all problems were solved. But when making a MySQL connection with [CODE]mysql -u user -h 192.168.1.x -p[/CODE] I can only connect every second time (connecting from one of the webservers via mysql-client-4.1).
When I disconnect one of the two nodes, the problem disappears, no matter which data server is disconnected.
So installing a load-balanced MySQL cluster actually decreased availability by 50%?
I can also connect without problems directly to each of the nodes.
Can anybody help me out? (P.S.: apologies for the elaborate explanation ;) )
Here are some configuration files:
I have located the problem, so this thread is closed.
Could you, for the archives, indicate how you solved the problem or what the cause was?
I will post my solution tomorrow or the day after tomorrow. I have to write a tutorial so that other students are able to reconstruct my whole setup, which I'm going to do this weekend.
The problems in my setup came from the combination of the SQL cluster, the Apache cluster and the NFS.
This particular problem was caused by two lines in /etc/network/interfaces: