How To Set Up A Load-Balanced MySQL Cluster With MySQL 5.1 - Page 3

5 How To Restart The Cluster

Now let's assume you want to restart the MySQL cluster, for example because you have changed /usr/local/mysql/var/mysql-cluster/config.ini on the cluster management server or for some other reason. To do this, you use the ndb_mgm cluster management client on the cluster management server.


On the ndb_mgm console, you type

shutdown;


You will then see something like this:

ndb_mgm> shutdown;
Node 3: Cluster shutdown initiated
Node 2: Node shutdown completed.
2 NDB Cluster node(s) have shutdown.
NDB Cluster management server shutdown.

This means that the cluster data nodes and also the cluster management server have shut down.



Type

quit;

to leave the ndb_mgm console.

To start the cluster management server, do this on the cluster management server:

ndb_mgmd -f /usr/local/mysql/mysql-cluster/config.ini  

and on the data nodes you can run

ndbd


or, if you have changed /usr/local/mysql/var/mysql-cluster/config.ini on the cluster management server:

ndbd --initial

Afterwards, you can check on the cluster management server if the cluster has restarted:


On the ndb_mgm console, type

show;

to see the current status of the cluster. It might take a few seconds after a restart until all nodes are reported as connected.



Type

quit;

to leave the ndb_mgm console.


6 Configure The Load Balancers

Our MySQL cluster is finished now, and you could start using it. However, we don't have a single IP address that we can use to access the cluster, which means you must configure your applications so that a part of them uses one MySQL cluster node and the rest uses the other node. Of course, all your applications could just use one node, but what's the point then in having a cluster if you do not split up the load between the cluster nodes? Another problem is: what happens if one of the cluster nodes fails? Then the applications that use this cluster node cannot work anymore.

The solution is to have a load balancer in front of the MySQL cluster which (as its name suggests) balances the load between the MySQL cluster nodes. The load balancer configures a virtual IP address that is shared between the cluster nodes, and all your applications use this virtual IP address to access the cluster. If one of the nodes fails, your applications will still work, because the load balancer redirects the requests to the working node.

Now in this scenario the load balancer becomes the bottleneck. What happens if the load balancer fails? Therefore we will configure two load balancers in an active/passive setup, which means we have one active load balancer, and the other one is a hot standby that becomes active if the active one fails. Both load balancers use heartbeat to check if the other load balancer is still alive, and both load balancers also run ldirectord, the actual load balancer that splits up the load onto the cluster nodes. heartbeat and ldirectord are provided by the Ultra Monkey package that we will install.

It is important that both load balancer nodes have support for IPVS (IP Virtual Server) in their kernels. IPVS implements transport-layer load balancing inside the Linux kernel.


6.1 Install Ultra Monkey

Ok, let's start: first we enable IPVS on both load balancers:

modprobe ip_vs_dh
modprobe ip_vs_ftp
modprobe ip_vs
modprobe ip_vs_lblc
modprobe ip_vs_lblcr
modprobe ip_vs_lc
modprobe ip_vs_nq
modprobe ip_vs_rr
modprobe ip_vs_sed
modprobe ip_vs_sh
modprobe ip_vs_wlc
modprobe ip_vs_wrr

In order to load the IPVS kernel modules at boot time, we list the modules in /etc/modules:

vi /etc/modules
ip_vs_dh
ip_vs_ftp
ip_vs
ip_vs_lblc
ip_vs_lblcr
ip_vs_lc
ip_vs_nq
ip_vs_rr
ip_vs_sed
ip_vs_sh
ip_vs_wlc
ip_vs_wrr

Now we edit /etc/apt/sources.list and add the Ultra Monkey repositories (don't remove the other repositories), and then we install Ultra Monkey:

vi /etc/apt/sources.list
deb sarge main
deb-src sarge main

apt-get update
apt-get install ultramonkey libdbi-perl libdbd-mysql-perl libmysqlclient14-dev

Now Ultra Monkey is being installed. If you see this warning:

  ¦ libsensors3 not functional                                               ¦
  ¦                                                                          ¦
  ¦ It appears that your kernel is not compiled with sensors support. As a   ¦
  ¦ result, libsensors3 will not be functional on your system.               ¦
  ¦                                                                          ¦
  ¦ If you want to enable it, have a look at "I2C Hardware Sensors Chip      ¦
  ¦ support" in your kernel configuration.                                   ¦

you can ignore it.

Answer the following questions:

Do you want to automatically load IPVS rules on boot?
<-- No

Select a daemon method.
<-- none

The libdbd-mysql-perl package we've just installed does not work with MySQL 5 (we use MySQL 5 on our MySQL cluster...), so we install the newest DBD::mysql Perl package:

cd /tmp
tar xvfz DBD-mysql-3.0002.tar.gz
cd DBD-mysql-3.0002
perl Makefile.PL
make install

We must enable packet forwarding:

vi /etc/sysctl.conf
# Enables packet forwarding
net.ipv4.ip_forward = 1
sysctl -p 
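After reloading the settings, you can verify that forwarding is active; the kernel exposes the current value under /proc:

```shell
# Should print 1 once net.ipv4.ip_forward = 1 has been applied
cat /proc/sys/net/ipv4/ip_forward
```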


6.2 Configure heartbeat

Next we configure heartbeat by creating three files (all three files must be identical on both load balancers):

vi /etc/ha.d/
logfacility        local0
bcast        eth0
mcast eth0 694 1 0
auto_failback off
node        mysql-lb1
node        mysql-lb2
respawn hacluster /usr/lib/heartbeat/ipfail
apiauth ipfail gid=haclient uid=hacluster

Please note: you must list the node names (in this case mysql-lb1 and mysql-lb2) as shown by

uname -n 

IP addresses do not work here. It is also a good idea to add proper entries for both load balancers (mysql-lb1 and mysql-lb2) to /etc/hosts:

vi /etc/hosts       localhost
mysql-lb1
mysql-lb2
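For illustration, assuming hypothetical load balancer addresses of and (substitute your own), the file could look like this:

```       localhost   mysql-lb1   mysql-lb2
```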

Now we create /etc/ha.d/haresources, where you set your virtual MySQL IP address:

vi /etc/ha.d/haresources
mysql-lb1 \
        LVSSyncDaemonSwap::master \

You must list one of the load balancer node names (here: mysql-lb1) and the virtual IP address together with the correct netmask (24) and broadcast address. If you are unsure about the correct settings, an online subnet calculator might help you.
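For illustration only — assuming a hypothetical virtual IP of with a /24 netmask and broadcast address (substitute your own values), the complete file might look like this; ldirectord and IPaddr2 are standard heartbeat resource scripts, and ldirectord.cf is the ldirectord configuration file name:

```
mysql-lb1 \
        ldirectord::ldirectord.cf \
        LVSSyncDaemonSwap::master \
```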

vi /etc/ha.d/authkeys
auth 3
3 md5 somerandomstring

somerandomstring is a password which the two heartbeat daemons on mysql-lb1 and mysql-lb2 use to authenticate against each other. Use your own string here. You have the choice between three authentication mechanisms. I use md5 as it is the most secure one.
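If you prefer something stronger than a hand-typed string, you can generate a random secret. This is just a sketch using /dev/urandom and md5sum:

```shell
# Generate a random shared secret and print an authkeys file to stdout
SECRET=$(dd if=/dev/urandom bs=512 count=1 2>/dev/null | md5sum | awk '{print $1}')
printf 'auth 3\n3 md5 %s\n' "$SECRET"
```

Redirect the output to /etc/ha.d/authkeys on both load balancers (remember that the file must be identical on both).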

/etc/ha.d/authkeys should be readable by root only, therefore we do this:

chmod 600 /etc/ha.d/authkeys


6.3 Configure ldirectord

Now we create the configuration file for ldirectord, the load balancer:

vi /etc/ha.d/
# Global Directives
virtual =
        service = mysql
        real = gate
        real = gate
        checktype = negotiate
        login = "ldirector"
        passwd = "ldirectorpassword"
        database = "ldirectordb"
        request = "SELECT * FROM connectioncheck"
        scheduler = wrr

Please fill in the correct virtual IP address and the correct IP addresses of your MySQL cluster nodes. 3306 is the port that MySQL runs on by default. We also specify a MySQL user (ldirector) and password (ldirectorpassword), a database (ldirectordb) and an SQL query. ldirectord uses this information to make test requests to the MySQL cluster nodes to check if they are still available. We are going to create the ldirectordb database with the ldirector user in the next step.
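For illustration, here is what a complete file could look like, assuming a hypothetical virtual IP of and cluster node IPs and (all placeholder addresses — use your own); checktimeout, checkinterval, autoreload, and quiescent are standard ldirectord global directives:

```
# Global Directives
checktimeout=10
checkinterval=2
autoreload=no
quiescent=yes

virtual =
        service = mysql
        real = gate
        real = gate
        checktype = negotiate
        login = "ldirector"
        passwd = "ldirectorpassword"
        database = "ldirectordb"
        request = "SELECT * FROM connectioncheck"
        scheduler = wrr
```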

Now we create the necessary system startup links for heartbeat and remove those of ldirectord (because ldirectord will be started by heartbeat):

update-rc.d -f heartbeat remove
update-rc.d heartbeat start 75 2 3 4 5 . stop 05 0 1 6 .
update-rc.d -f ldirectord remove


6.4 Create A Database Called ldirector

Next we create the ldirectordb database on our MySQL cluster nodes. This database will be used by our load balancers to check the availability of the MySQL cluster nodes. On the first cluster node, run:

mysql -u root -p
GRANT ALL ON ldirectordb.* TO 'ldirector'@'%' IDENTIFIED BY 'ldirectorpassword';
CREATE DATABASE ldirectordb;
USE ldirectordb;
CREATE TABLE connectioncheck (i INT) ENGINE=NDBCLUSTER;
INSERT INTO connectioncheck VALUES (1);

On the second cluster node, run:

mysql -u root -p
GRANT ALL ON ldirectordb.* TO 'ldirector'@'%' IDENTIFIED BY 'ldirectorpassword';
CREATE DATABASE ldirectordb;


6.5 Prepare The MySQL Cluster Nodes For Load Balancing

Finally, we must configure our MySQL cluster nodes to accept requests on the virtual IP address:

apt-get install iproute

Add the following to /etc/sysctl.conf:

vi /etc/sysctl.conf
# Enable configuration of arp_ignore option
net.ipv4.conf.all.arp_ignore = 1

# When an arp request is received on eth0, only respond if that address is
# configured on eth0. In particular, do not respond if the address is
# configured on lo
net.ipv4.conf.eth0.arp_ignore = 1

# Ditto for eth1, add for all ARPing interfaces
#net.ipv4.conf.eth1.arp_ignore = 1

# Enable configuration of arp_announce option
net.ipv4.conf.all.arp_announce = 2

# When making an ARP request sent through eth0 Always use an address that
# is configured on eth0 as the source address of the ARP request.  If this
# is not set, and packets are being sent out eth0 for an address that is on
# lo, and an arp request is required, then the address on lo will be used.
# As the source IP address of arp requests is entered into the ARP cache on
# the destination, it has the effect of announcing this address.  This is
# not desirable in this case as adresses on lo on the real-servers should
# be announced only by the linux-director.
net.ipv4.conf.eth0.arp_announce = 2

# Ditto for eth1, add for all ARPing interfaces
#net.ipv4.conf.eth1.arp_announce = 2
sysctl -p

Add this section for the virtual IP address to /etc/network/interfaces:

vi /etc/network/interfaces
auto lo:0
iface lo:0 inet static
  pre-up sysctl -p > /dev/null

Then bring up the new interface:

ifup lo:0
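For illustration, with a hypothetical virtual IP of (use your own address), the complete stanza would also carry the address and netmask lines:

```
auto lo:0
iface lo:0 inet static
  address
  netmask
  pre-up sysctl -p > /dev/null
```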