How To Set Up A Load-Balanced MySQL Cluster
Author: Falko Timme <ft [at] falkotimme [dot] com>
Last edited 03/27/2006
This tutorial shows how to configure a MySQL 5 cluster with three nodes: two storage nodes and one management node. The cluster is load-balanced by a high-availability load balancer that itself consists of two nodes and uses the Ultra Monkey package, which provides heartbeat (for checking whether the other load balancer node is still alive) and ldirectord (to distribute requests across the nodes of the MySQL cluster).
In this document I use Debian Sarge for all nodes, so the setup might differ a bit on other distributions. The MySQL version I use in this setup is 5.0.19. If you do not want to use MySQL 5, you can probably use MySQL 4.1 as well, although I haven't tested it.
This howto is meant as a practical guide; it does not cover the theoretical background, which is treated in many other documents on the web.
This document comes without warranty of any kind! This is not the only way of setting up such a system; there are many ways of achieving this goal, but this is the way I take. I do not issue any guarantee that this will work for you!
1 My Servers
I use the following Debian servers that are all in the same network (192.168.0.x in this example):
- sql1.example.com: 192.168.0.101 MySQL cluster node 1
- sql2.example.com: 192.168.0.102 MySQL cluster node 2
- loadb1.example.com: 192.168.0.103 Load Balancer 1 / MySQL cluster management server
- loadb2.example.com: 192.168.0.104 Load Balancer 2
In addition to that we need a virtual IP address: 192.168.0.105. It will be assigned to the MySQL cluster by the load balancers so that applications have a single IP address through which to access the cluster.
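If these hostnames are not resolvable via DNS, you can add them to /etc/hosts on every machine. A possible fragment (using the example addresses above) would look like this:

```
192.168.0.101   sql1.example.com    sql1
192.168.0.102   sql2.example.com    sql2
192.168.0.103   loadb1.example.com  loadb1
192.168.0.104   loadb2.example.com  loadb2
```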
Although we want to have two MySQL cluster nodes in our MySQL cluster, we still need a third node, the MySQL cluster management server, mainly for one reason: if one of the two cluster nodes fails while the management server is not running, the data on the two cluster nodes can become inconsistent ("split brain"). We also need the management server to configure the MySQL cluster.
So normally we would need five machines for our setup:
2 MySQL cluster nodes + 1 cluster management server + 2 Load Balancers = 5
As the MySQL cluster management server uses very few resources and would otherwise just sit there doing nothing, we can put our first load balancer on the same machine. This saves us one machine, so we end up with four machines.
2 Set Up The MySQL Cluster Management Server
First we have to download MySQL 5.0.19 (the max version!) and install the cluster management server (ndb_mgmd) and the cluster management client (ndb_mgm - it can be used to monitor what's going on in the cluster). The following steps are carried out on loadb1.example.com (192.168.0.103):
mkdir /usr/src/mysql-mgm
cd /usr/src/mysql-mgm

(Download the tarball mysql-max-5.0.19-linux-i686-glibc23.tar.gz from a MySQL mirror into this directory first.)

tar xvfz mysql-max-5.0.19-linux-i686-glibc23.tar.gz
cd mysql-max-5.0.19-linux-i686-glibc23
mv bin/ndb_mgm /usr/bin
mv bin/ndb_mgmd /usr/bin
chmod 755 /usr/bin/ndb_mg*
cd /usr/src
rm -rf /usr/src/mysql-mgm
Next, we must create the cluster configuration file, /var/lib/mysql-cluster/config.ini (create the directory first if it does not exist):

mkdir /var/lib/mysql-cluster
cd /var/lib/mysql-cluster
vi config.ini

[NDBD DEFAULT]
NoOfReplicas=2

[MYSQLD DEFAULT]

[NDB_MGMD DEFAULT]

[TCP DEFAULT]

# Section for the cluster management node
[NDB_MGMD]
# IP address of the management node (this system)
HostName=192.168.0.103

# Section for the storage nodes
[NDBD]
# IP address of the first storage node
HostName=192.168.0.101
DataDir=/var/lib/mysql-cluster

[NDBD]
# IP address of the second storage node
HostName=192.168.0.102
DataDir=/var/lib/mysql-cluster

# one [MYSQLD] per storage node
[MYSQLD]
[MYSQLD]
Please replace the IP addresses in the file appropriately.
Then we start the cluster management server:
ndb_mgmd -f /var/lib/mysql-cluster/config.ini
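You can check that the management server is up with the management client. At this point the two storage nodes and the MySQL nodes will show as not connected, since we haven't set them up yet:

```shell
# Ask the management server for its view of the cluster;
# -e runs the given management client command non-interactively.
ndb_mgm -e show
```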
It makes sense to automatically start the management server at system boot time, so we create a very simple init script and the appropriate startup links:
echo 'ndb_mgmd -f /var/lib/mysql-cluster/config.ini' > /etc/init.d/ndb_mgmd
chmod 755 /etc/init.d/ndb_mgmd
update-rc.d ndb_mgmd defaults
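As a quick check (assuming Debian's default runlevel 2), you can list the symlinks update-rc.d created and run the script once by hand:

```shell
# The startup links created by update-rc.d (S20 is the default sequence number):
ls -l /etc/rc2.d/ | grep ndb_mgmd
# Run the init script manually to make sure it starts the daemon:
/etc/init.d/ndb_mgmd
```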