How To Set Up A Load-Balanced MySQL Cluster - Page 6

6.2 Configure heartbeat

Next we configure heartbeat by creating three files (all three files must be identical on loadb1.example.com and loadb2.example.com):

loadb1.example.com / loadb2.example.com:

vi /etc/ha.d/ha.cf

logfacility        local0
bcast eth0
mcast eth0 225.0.0.1 694 1 0
auto_failback off
node loadb1
node loadb2
respawn hacluster /usr/lib/heartbeat/ipfail
apiauth ipfail gid=haclient uid=hacluster

Please note: you must list the node names (in this case loadb1 and loadb2) exactly as they are shown by

uname -n

Other than that, you don't have to change anything in the file.
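
For example, on loadb1 (the output shown here is illustrative):

loadb1:~# uname -n
loadb1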

vi /etc/ha.d/haresources

loadb1        \
ldirectord::ldirectord.cf \
LVSSyncDaemonSwap::master \
IPaddr2::192.168.0.105/24/eth0/192.168.0.255

You must list one of the load balancer node names (here: loadb1) and list the virtual IP address (192.168.0.105) together with the correct netmask (24) and broadcast address (192.168.0.255). For example, on the 192.168.0.0/24 network used here, the /24 prefix corresponds to the netmask 255.255.255.0 and the broadcast address 192.168.0.255. If you are unsure about the correct settings for your network, http://www.subnetmask.info/ might help you.

vi /etc/ha.d/authkeys

auth 3
3 md5 somerandomstring

somerandomstring is a password which the two heartbeat daemons on loadb1 and loadb2 use to authenticate against each other. Use your own string here. You have the choice between three authentication mechanisms (crc, md5, and sha1); crc provides no real authentication, so use md5 or sha1. I use md5 here.
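
If you need inspiration, one way to generate such a random string is (a sketch):

dd if=/dev/urandom bs=512 count=1 2>/dev/null | md5sum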

/etc/ha.d/authkeys should be readable by root only, therefore we do this:

loadb1.example.com / loadb2.example.com:

chmod 600 /etc/ha.d/authkeys


6.3 Configure ldirectord

Now we create the configuration file for ldirectord, the load balancer:

loadb1.example.com / loadb2.example.com:

vi /etc/ha.d/ldirectord.cf

# Global Directives
checktimeout=10
checkinterval=2
autoreload=no
logfile="local0"
quiescent=yes

virtual = 192.168.0.105:3306
        service = mysql
        real = 192.168.0.101:3306 gate
        real = 192.168.0.102:3306 gate
        checktype = negotiate
        login = "ldirector"
        passwd = "ldirectorpassword"
        database = "ldirectordb"
        request = "SELECT * FROM connectioncheck"
        scheduler = wrr

Please fill in the correct virtual IP address (192.168.0.105) and the correct IP addresses of your MySQL cluster nodes (192.168.0.101 and 192.168.0.102). 3306 is the port that MySQL runs on by default. We also specify a MySQL user (ldirector) and password (ldirectorpassword), a database (ldirectordb), and an SQL query. ldirectord uses this information to make test requests to the MySQL cluster nodes to check whether they are still available. We are going to create the ldirectordb database and the ldirector user in the next step.
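
As a preview, that step will boil down to something like the following on each MySQL cluster node (a sketch; the mysql privilege tables are not clustered, so the GRANT has to be issued on every SQL node, and the connectioncheck table only needs to exist so that the test query can succeed):

mysql -u root -p

GRANT ALL ON ldirectordb.* TO 'ldirector'@'%' IDENTIFIED BY 'ldirectorpassword';  -- the user ldirectord logs in as
FLUSH PRIVILEGES;
CREATE DATABASE ldirectordb;
USE ldirectordb;
CREATE TABLE connectioncheck (i INT) ENGINE=NDBCLUSTER;  -- NDBCLUSTER so the check goes through the cluster engine
INSERT INTO connectioncheck VALUES (1);                  -- one row, so SELECT * returns data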

Now we create the necessary system startup links for heartbeat and remove those of ldirectord (because ldirectord will be started by heartbeat):

loadb1.example.com / loadb2.example.com:

update-rc.d -f heartbeat remove
update-rc.d heartbeat start 75 2 3 4 5 . stop 05 0 1 6 .
update-rc.d -f ldirectord remove
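
Once heartbeat is started, a quick sanity check might look like this (a sketch; exact output will vary):

loadb1.example.com / loadb2.example.com:

/etc/init.d/heartbeat start

On the active load balancer, ip addr sh eth0 should now list the virtual IP 192.168.0.105 as a secondary address, and ipvsadm -L -n should show the two MySQL nodes as real servers behind 192.168.0.105:3306.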

Comments

From: Anonymous at: 2006-03-27 18:10:41

This is rather unfortunate, but with no foreign key support and memory-only storage, MySQL Cluster is not a really viable solution for most RDBMS users.

From: Anonymous at: 2006-03-28 22:06:44

The InnoDB engine supports foreign keys and works with the MySQL cluster, so your comment is incorrect, sir.

From: Anonymous at: 2006-03-31 07:26:23

Actually you are wrong, mate. While InnoDB in MySQL supports foreign keys, you cannot use InnoDB while setting up the MySQL Cluster support described in this article. You can only use the NDB backend, which is a simplified version of MyISAM. It also has the limitation of being completely memory-resident.

From: Anonymous at: 2006-05-28 17:22:00

NDB Cluster has *nothing* to do with MyISAM. NDB has a long history outside of MySQL, and it has no relationship whatsoever to MyISAM.

From: at: 2007-07-06 20:06:50

We had MySQL Cluster running as the backend for a cluster of webapps, and it had numerous problems, many of them simply from a lack of needed features. I opened a lot of enhancement requests with MySQL about these. Some, like the fact that the 'mysql' system tables are not centralized but kept separately on each node, would nearly drive you insane; we had to try synchronizing the user tables between all the client nodes. Another issue is that when a user installs a webapp while one of the client nodes happens to be down for maintenance, that node never learns of the new database the user just set up, so if the load balancer directs him to that client node later on, everything errors out. There are just many architectural issues with MySQL Cluster that were never very well thought through. It has a long way to go to being enterprise-ready. And performance was abysmal to boot.

From: Anonymous at: 2006-03-28 23:51:43

Memory only storage is a significant limitation. I hope this is fixed in a future version.

From: Anonymous at: 2006-04-23 15:32:17

Well, not anymore:

In MySQL 5.1, the memory-only requirement of MySQL Cluster is removed and operational data may now be accessed both on disk and in memory. A DBA can specify that table data can reside on disk, in memory, or a combination of main memory and disk (although a single table can only be assigned to either disk or main memory). Disk-based support includes new storage structures, tablespaces, that are used to logically house table data on disk. In addition, new memory caches are in place to manage the transfer of data stored in tablespaces to memory for fast access to repeatedly referenced information.
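
For example, the 5.1 syntax for a disk-based NDB table looks roughly like this (names and sizes here are made up):

CREATE LOGFILE GROUP lg_1 ADD UNDOFILE 'undo_1.log' INITIAL_SIZE 16M ENGINE NDBCLUSTER;
CREATE TABLESPACE ts_1 ADD DATAFILE 'data_1.dat' USE LOGFILE GROUP lg_1 INITIAL_SIZE 32M ENGINE NDBCLUSTER;
CREATE TABLE t1 (id INT PRIMARY KEY, payload VARCHAR(255)) TABLESPACE ts_1 STORAGE DISK ENGINE NDBCLUSTER;

Note that indexed columns (here: id) still live in memory; only the non-indexed data goes to disk.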

From: Anonymous at: 2006-04-23 16:13:16

"Memory only storage is a significant limitation. I hope this is fixed in a future version." This isn't a limitation to be fixed, but the fundamental tradeoff in MySQL Cluster Server's design: by accepting the limitation of being memory-based instead of disk based, it can be several orders of magnitude faster. If your data can't fit in RAM, and you don't need the performance, you should use one of the disk-based table types.

From: Anonymous at: 2006-03-29 18:44:18

The cluster management software seems to be a single point of failure; that is, if the load balancer running this software goes down, doesn't the cluster either go down or end up with inconsistent data ("split brain", as referenced in the article)?

I'm very new to clustering, so I'd be happy to learn why I'm wrong!

From: Anonymous at: 2006-05-05 12:24:30

The storage and MySQL Server nodes are not dependent on the management server for their execution. Its purpose is only to manage the cluster. It may fail and be restarted any number of times without affecting the running MySQL Cluster.

From: Anonymous at: 2006-04-23 15:33:59

You use Debian but install packages from source. Don't store files in /usr/bin; use /usr/local or /opt.

Custom packages installed in /usr can be broken by the Debian packaging system.

From: Anonymous at: 2006-08-27 00:30:27

Actually, if you use checkinstall instead of 'make install', it'll add the package to your dpkg/apt setup so the files won't be overwritten.
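
For example (illustrative), from the unpacked source tree:

./configure
make
checkinstall    # packages the build as a .deb and registers it with dpkg, instead of a raw 'make install'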

From: at: 2007-05-31 14:28:28

You can make each load balancer a management server and eliminate the single point of failure.

Install the management server on both load balancers and add both to config.ini:

# Management Server 1
[NDB_MGMD]
HostName=192.168.0.8                      # the IP of the first management server
ID=1
DataDir=/var/lib/mysql-cluster

# Management Server 2
[NDB_MGMD]
HostName=192.168.0.9                      # the IP of the second management server
ID=2
DataDir=/var/lib/mysql-cluster

Then, on each data node, modify my.cnf:

[mysqld]
ndbcluster
ndb-connectstring = "host=192.168.0.8,host=192.168.0.9"

[ndb_mgm]
connect-string = "host=192.168.0.8,host=192.168.0.9"

[ndbd]
connect-string = "host=192.168.0.8,host=192.168.0.9"

Make sure to run ndbd --initial.
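
Afterwards you can check that both management servers and all data nodes are connected with something like:

ndb_mgm -e show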


From: at: 2008-12-22 12:39:24

I wanted to know whether there will be a significant change in performance if we run Apache with load balancing enabled in front of MySQL Cluster.

Has anyone tried it before?

Anjin

From: Elumalai Ranganathan at: 2009-05-26 07:51:23

Thanks a lot! This document helped a lot in configuring the MySQL cluster. I have a query: I am going to configure a web server on the nodes using Tomcat. Is it possible to use the MySQL virtual IP for the Tomcat configuration?

From: Anonymous at: 2010-12-06 09:40:32

http://www.dancryer.com/2010/01/mysql-circular-replication

This is part 1 of a three-post series:

- MySQL Load-Balanced Cluster Guide – Part 1: setting up the servers themselves and configuring MySQL replication.
- MySQL Load-Balanced Cluster Guide – Part 2: setting up a script to monitor the status of your MySQL cluster nodes, which we'll use in the next guide to set up our proxy.
- MySQL Load-Balanced Cluster Guide – Part 3: setting up the load balancer with HAProxy, using the monitoring scripts.

From: Altaf Hussain at: 2013-11-04 10:06:41

Very elaborate tutorial, I must say. A noob coming here can do a lot after reading it!

From: Jack Chen at: 2012-08-27 14:04:32

Thanks for the detailed example. I would like to share one problem I had when following this document.

When I first set it up on CentOS, I only saw the two ndbd nodes connected to the management node; there was no mysqld node connected.

It took me quite some time to figure out the problem: the ndbd processes on the ndb nodes listen on two or three random ports, and the ndb_mgmd process on the management node needs to connect to those ports. After I stopped iptables on the ndb nodes (iptables on the management node was already configured to allow incoming connections on port 1186), the cluster started.

MySQL's documentation for the ndbd configuration seems very poor; I couldn't find out how to make ndbd use a fixed port, so I had to shut down iptables on the ndb nodes.

Maybe it's because I am using an old version, MySQL 5.0.21? It seems newer MySQL versions don't have the "max" edition any more.

From: Pankaj at: 2014-03-11 19:29:46

Really appreciate this guide. The person who wrote it made it very simple to set up a MySQL cluster. I am very thankful to that person.