How To Set Up A Load-Balanced MySQL Cluster With MySQL 5.1 - Page 3
5 How To Restart The Cluster
Now let's assume you want to restart the MySQL cluster, for example because you have changed /usr/local/mysql/var/mysql-cluster/config.ini on mysql-mngt.example.com or for some other reason. To do this, you use the ndb_mgm cluster management client on mysql-mngt.example.com:
mysql-mngt.example.com:
ndb_mgm
On the ndb_mgm console, you type
shutdown;
You will then see something like this:
ndb_mgm> shutdown;
Node 3: Cluster shutdown initiated
Node 2: Node shutdown completed.
2 NDB Cluster node(s) have shutdown.
NDB Cluster management server shutdown.
ndb_mgm>
This means that the cluster data nodes mysql-data1.example.com and mysql-data2.example.com and also the cluster management server have shut down.
Run
quit;
to leave the ndb_mgm console.
To start the cluster management server, do this on mysql-mngt.example.com:
mysql-mngt.example.com:
ndb_mgmd -f /usr/local/mysql/var/mysql-cluster/config.ini
and on mysql-data1.example.com and mysql-data2.example.com you can run
mysql-data1.example.com / mysql-data2.example.com:
ndbd
or, if you have changed /usr/local/mysql/var/mysql-cluster/config.ini on mysql-mngt.example.com (note that --initial re-initialises a data node's local data files, so use it only when a configuration change requires it):
ndbd --initial
Afterwards, you can check on mysql-mngt.example.com if the cluster has restarted:
mysql-mngt.example.com:
ndb_mgm
On the ndb_mgm console, type
show;
to see the current status of the cluster. It might take a few seconds after a restart until all nodes are reported as connected.
Type
quit;
to leave the ndb_mgm console.
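If you just want a quick status check without entering the interactive console, ndb_mgm can also run a single command and exit (assuming your ndb_mgm build supports the -e option, as current 5.x releases do):
mysql-mngt.example.com:
ndb_mgm -e "show"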
6 Configure The Load Balancers
Our MySQL cluster is now finished, and you could start using it right away. However, we do not have a single IP address for accessing the cluster, which means you would have to configure your applications so that one part of them uses MySQL cluster node 1 (mysql-data1.example.com) and the rest uses the other node (mysql-data2.example.com). Of course, all your applications could just use one node, but then what is the point of having a cluster if you do not split up the load between the cluster nodes? Another problem: what happens if one of the cluster nodes fails? Then the applications that use this cluster node cannot work anymore.
The solution is to have a load balancer in front of the MySQL cluster which (as its name suggests) balances the load between the MySQL cluster nodes. The load balancer configures a virtual IP address that is shared between the cluster nodes, and all your applications use this virtual IP address to access the cluster. If one of the nodes fails, your applications will still work, because the load balancer redirects the requests to the working node.
Now in this scenario the load balancer becomes the bottleneck. What happens if the load balancer fails? Therefore we will configure two load balancers (mysql-lb1.example.com and mysql-lb2.example.com) in an active/passive setup, which means we have one active load balancer, and the other one is a hot standby that becomes active if the active one fails. Both load balancers use heartbeat to check if the other load balancer is still alive, and both load balancers also run ldirectord, the actual load balancer that splits up the load onto the cluster nodes. heartbeat and ldirectord are provided by the Ultra Monkey package that we will install.
It is important that mysql-lb1.example.com and mysql-lb2.example.com have support for IPVS (IP Virtual Server) in their kernels. IPVS implements transport-layer load balancing inside the Linux kernel.
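If you are not sure whether your kernel was built with IPVS support, you can take a quick look at the kernel configuration; this is only a sanity check and assumes your distribution installs a config file under /boot:
mysql-lb1.example.com / mysql-lb2.example.com:
grep -i "^CONFIG_IP_VS" /boot/config-$(uname -r)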
6.1 Install Ultra Monkey
Ok, let's start: first we enable IPVS on mysql-lb1.example.com and mysql-lb2.example.com:
mysql-lb1.example.com / mysql-lb2.example.com:
modprobe ip_vs_dh
modprobe ip_vs_ftp
modprobe ip_vs
modprobe ip_vs_lblc
modprobe ip_vs_lblcr
modprobe ip_vs_lc
modprobe ip_vs_nq
modprobe ip_vs_rr
modprobe ip_vs_sed
modprobe ip_vs_sh
modprobe ip_vs_wlc
modprobe ip_vs_wrr
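You can then verify that the IPVS modules have actually been loaded:
mysql-lb1.example.com / mysql-lb2.example.com:
lsmod | grep ip_vs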
In order to load the IPVS kernel modules at boot time, we list the modules in /etc/modules:
mysql-lb1.example.com / mysql-lb2.example.com:
vi /etc/modules
ip_vs_dh
ip_vs_ftp
ip_vs
ip_vs_lblc
ip_vs_lblcr
ip_vs_lc
ip_vs_nq
ip_vs_rr
ip_vs_sed
ip_vs_sh
ip_vs_wlc
ip_vs_wrr
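If you prefer to append these entries from the shell instead of editing the file by hand, a simple loop does the same thing (just a sketch; it appends the module names listed above to /etc/modules):
mysql-lb1.example.com / mysql-lb2.example.com:
for mod in ip_vs_dh ip_vs_ftp ip_vs ip_vs_lblc ip_vs_lblcr ip_vs_lc ip_vs_nq ip_vs_rr ip_vs_sed ip_vs_sh ip_vs_wlc ip_vs_wrr; do echo $mod >> /etc/modules; done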
Now we edit /etc/apt/sources.list and add the Ultra Monkey repositories (don't remove the other repositories), and then we install Ultra Monkey:
mysql-lb1.example.com / mysql-lb2.example.com:
vi /etc/apt/sources.list
deb http://www.ultramonkey.org/download/3/ sarge main
deb-src http://www.ultramonkey.org/download/3 sarge main
apt-get update
apt-get install ultramonkey libdbi-perl libdbd-mysql-perl libmysqlclient14-dev
Now Ultra Monkey is being installed. If you see this warning:
libsensors3 not functional

It appears that your kernel is not compiled with sensors support. As a
result, libsensors3 will not be functional on your system.

If you want to enable it, have a look at "I2C Hardware Sensors Chip
support" in your kernel configuration.
you can ignore it.
Answer the following questions:
Do you want to automatically load IPVS rules on boot?
<-- No
Select a daemon method.
<-- none
The libdbd-mysql-perl package we have just installed does not work with MySQL 5 (and we use MySQL 5 on our MySQL cluster), so we install the newest DBD::mysql Perl package:
mysql-lb1.example.com / mysql-lb2.example.com:
cd /tmp
wget http://search.cpan.org/CPAN/authors/id/C/CA/CAPTTOFU/DBD-mysql-3.0002.tar.gz
tar xvfz DBD-mysql-3.0002.tar.gz
cd DBD-mysql-3.0002
perl Makefile.PL
make
make install
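To make sure Perl now picks up the freshly installed DBD::mysql, you can print its version (a quick sanity check; the exact version string depends on the tarball you installed):
mysql-lb1.example.com / mysql-lb2.example.com:
perl -MDBD::mysql -e 'print "$DBD::mysql::VERSION\n"'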
We must enable packet forwarding:
mysql-lb1.example.com / mysql-lb2.example.com:
vi /etc/sysctl.conf
# Enables packet forwarding
net.ipv4.ip_forward = 1
sysctl -p
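You can verify that packet forwarding is now active (the value should be 1):
mysql-lb1.example.com / mysql-lb2.example.com:
sysctl net.ipv4.ip_forward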
6.2 Configure heartbeat
Next we configure heartbeat by creating three files (all three files must be identical on mysql-lb1.example.com and mysql-lb2.example.com):
mysql-lb1.example.com / mysql-lb2.example.com:
vi /etc/ha.d/ha.cf
logfacility local0
bcast eth0
mcast eth0 225.0.0.1 694 1 0
auto_failback off
node mysql-lb1
node mysql-lb2
respawn hacluster /usr/lib/heartbeat/ipfail
apiauth ipfail gid=haclient uid=hacluster
Please note: you must list the node names (in this case mysql-lb1 and mysql-lb2) exactly as shown by
uname -n
IP addresses do not work here. It is also a good idea to add proper entries for both load balancers (mysql-lb1 and mysql-lb2) to /etc/hosts on both machines:
vi /etc/hosts
127.0.0.1       localhost
10.0.1.31       mysql-lb1.example.com   mysql-lb1
10.0.1.32       mysql-lb2.example.com   mysql-lb2
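A quick way to confirm that the node names resolve correctly and match the heartbeat configuration (just a sanity check):
mysql-lb1.example.com / mysql-lb2.example.com:
uname -n
getent hosts mysql-lb1 mysql-lb2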
Now set your virtual MySQL IP address in /etc/ha.d/haresources:
vi /etc/ha.d/haresources
mysql-lb1 \
        ldirectord::ldirectord.cf \
        LVSSyncDaemonSwap::master \
        IPaddr2::10.0.1.10/24/eth0/10.0.1.255
You must list one of the load balancer node names (here: mysql-lb1) and list the virtual IP address (10.0.1.10) together with the correct netmask (24) and broadcast address (10.0.1.255). If you are unsure about the correct settings, http://www.subnetmask.info/ might help you.
vi /etc/ha.d/authkeys
auth 3
3 md5 somerandomstring
somerandomstring is a password which the two heartbeat daemons on mysql-lb1 and mysql-lb2 use to authenticate against each other. Use your own string here. You have the choice between three authentication mechanisms. I use md5 as it is the most secure one.
/etc/ha.d/authkeys should be readable by root only, therefore we do this:
mysql-lb1.example.com / mysql-lb2.example.com:
chmod 600 /etc/ha.d/authkeys
6.3 Configure ldirectord
Now we create the configuration file for ldirectord, the load balancer:
mysql-lb1.example.com / mysql-lb2.example.com:
vi /etc/ha.d/ldirectord.cf
# Global Directives
checktimeout=10
checkinterval=2
autoreload=no
logfile="local0"
quiescent=yes

virtual = 10.0.1.10:3306
        service = mysql
        real = 10.0.1.33:3306 gate
        real = 10.0.1.34:3306 gate
        checktype = negotiate
        login = "ldirector"
        passwd = "ldirectorpassword"
        database = "ldirectordb"
        request = "SELECT * FROM connectioncheck"
        scheduler = wrr
Please fill in the correct virtual IP address (10.0.1.10) and the correct IP addresses of your MySQL cluster nodes (10.0.1.33 and 10.0.1.34). 3306 is the port that MySQL runs on by default. We also specify a MySQL user (ldirector) and password (ldirectorpassword), a database (ldirectordb) and an SQL query. ldirectord uses this information to make test requests to the MySQL cluster nodes to check if they are still available. We are going to create the ldirectordb database together with the ldirector user in the next step.
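Once the ldirector user and the ldirectordb database exist (we create them in the next step), you can run the same check query by hand from the load balancers to see exactly what ldirectord will test (assuming the mysql command-line client is installed on the load balancers):
mysql-lb1.example.com / mysql-lb2.example.com:
mysql -h 10.0.1.33 -u ldirector -pldirectorpassword ldirectordb -e "SELECT * FROM connectioncheck"
mysql -h 10.0.1.34 -u ldirector -pldirectorpassword ldirectordb -e "SELECT * FROM connectioncheck"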
Now we create the necessary system startup links for heartbeat and remove those of ldirectord (because ldirectord will be started by heartbeat):
mysql-lb1.example.com / mysql-lb2.example.com:
update-rc.d -f heartbeat remove
update-rc.d heartbeat start 75 2 3 4 5 . stop 05 0 1 6 .
update-rc.d -f ldirectord remove
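You can check the result by listing the startup links for runlevel 2 (heartbeat should appear, ldirectord should not):
mysql-lb1.example.com / mysql-lb2.example.com:
ls /etc/rc2.d/ | grep -E 'heartbeat|ldirectord'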
6.4 Create A Database Called ldirectordb
Next we create the ldirectordb database on our MySQL cluster nodes mysql-data1.example.com and mysql-data2.example.com. This database will be used by our load balancers to check the availability of the MySQL cluster nodes.
mysql-data1.example.com:
mysql -u root -p
GRANT ALL ON ldirectordb.* TO 'ldirector'@'%' IDENTIFIED BY 'ldirectorpassword';
FLUSH PRIVILEGES;
CREATE DATABASE ldirectordb;
USE ldirectordb;
CREATE TABLE connectioncheck (I INT) ENGINE=NDBCLUSTER;
INSERT INTO connectioncheck () VALUES (1);
quit;
mysql-data2.example.com:
mysql -u root -p
GRANT ALL ON ldirectordb.* TO 'ldirector'@'%' IDENTIFIED BY 'ldirectorpassword';
FLUSH PRIVILEGES;
CREATE DATABASE ldirectordb;
quit;
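Because the connectioncheck table uses the NDBCLUSTER engine, it is automatically available on both data nodes. You can verify this from the second node (a quick check; it assumes the cluster is up and you know the MySQL root password there):
mysql-data2.example.com:
mysql -u root -p -e "SELECT * FROM ldirectordb.connectioncheck"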
6.5 Prepare The MySQL Cluster Nodes For Load Balancing
Finally we must configure our MySQL cluster nodes mysql-data1.example.com and mysql-data2.example.com to accept requests on the virtual IP address 10.0.1.10.
mysql-data1.example.com / mysql-data2.example.com:
apt-get install iproute
Add the following to /etc/sysctl.conf:
mysql-data1.example.com / mysql-data2.example.com:
vi /etc/sysctl.conf
# Enable configuration of arp_ignore option
net.ipv4.conf.all.arp_ignore = 1

# When an arp request is received on eth0, only respond if that address is
# configured on eth0. In particular, do not respond if the address is
# configured on lo
net.ipv4.conf.eth0.arp_ignore = 1

# Ditto for eth1, add for all ARPing interfaces
#net.ipv4.conf.eth1.arp_ignore = 1

# Enable configuration of arp_announce option
net.ipv4.conf.all.arp_announce = 2

# When making an ARP request sent through eth0, always use an address that
# is configured on eth0 as the source address of the ARP request. If this
# is not set, and packets are being sent out eth0 for an address that is on
# lo, and an arp request is required, then the address on lo will be used.
# As the source IP address of arp requests is entered into the ARP cache on
# the destination, it has the effect of announcing this address. This is
# not desirable in this case as addresses on lo on the real-servers should
# be announced only by the linux-director.
net.ipv4.conf.eth0.arp_announce = 2

# Ditto for eth1, add for all ARPing interfaces
#net.ipv4.conf.eth1.arp_announce = 2
sysctl -p
Add this section for the virtual IP address to /etc/network/interfaces:
mysql-data1.example.com / mysql-data2.example.com:
vi /etc/network/interfaces
auto lo:0
iface lo:0 inet static
        address 10.0.1.10
        netmask 255.255.255.255
        pre-up sysctl -p > /dev/null
ifup lo:0
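To confirm that the virtual IP address is now configured on the loopback interface and that the ARP settings took effect, you can check (just a sanity check):
mysql-data1.example.com / mysql-data2.example.com:
ip addr show lo
sysctl net.ipv4.conf.eth0.arp_ignore net.ipv4.conf.eth0.arp_announce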