How To Set Up A Load-Balanced MySQL Cluster - Page 7

6.4 Create A Database Called ldirectordb

Next we create the ldirectordb database (and an ldirector user) on our MySQL cluster nodes sql1.example.com and sql2.example.com. This database will be used by our load balancers to check the availability of the MySQL cluster nodes.

sql1.example.com:

mysql -u root -p
GRANT ALL ON ldirectordb.* TO 'ldirector'@'%' IDENTIFIED BY 'ldirectorpassword';
FLUSH PRIVILEGES;
CREATE DATABASE ldirectordb;
USE ldirectordb;
CREATE TABLE connectioncheck (i INT) ENGINE=NDBCLUSTER;
INSERT INTO connectioncheck (i) VALUES (1);
quit;

sql2.example.com:

mysql -u root -p
GRANT ALL ON ldirectordb.* TO 'ldirector'@'%' IDENTIFIED BY 'ldirectorpassword';
FLUSH PRIVILEGES;
CREATE DATABASE ldirectordb;
quit;
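Before moving on, it is worth testing the ldirector login against both SQL nodes, since this mirrors the check the load balancers will perform. A quick sanity check (not part of the original guide; run it from any host allowed to reach the nodes, for example one of the load balancers):

mysql -h sql1.example.com -u ldirector -pldirectorpassword ldirectordb -e "SELECT * FROM connectioncheck;"
mysql -h sql2.example.com -u ldirector -pldirectorpassword ldirectordb -e "SELECT * FROM connectioncheck;"

Both commands should return the row with i = 1: the connectioncheck table uses the NDBCLUSTER engine, so it is visible from both SQL nodes even though it was created only on sql1.example.com.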


6.5 Prepare The MySQL Cluster Nodes For Load Balancing

Finally we must configure our MySQL cluster nodes sql1.example.com and sql2.example.com to accept requests on the virtual IP address 192.168.0.105.

sql1.example.com / sql2.example.com:

apt-get install iproute

Add the following to /etc/sysctl.conf:

sql1.example.com / sql2.example.com:

vi /etc/sysctl.conf

# Enable configuration of arp_ignore option
net.ipv4.conf.all.arp_ignore = 1

# When an arp request is received on eth0, only respond if that address is
# configured on eth0. In particular, do not respond if the address is
# configured on lo
net.ipv4.conf.eth0.arp_ignore = 1

# Ditto for eth1, add for all ARPing interfaces
#net.ipv4.conf.eth1.arp_ignore = 1


# Enable configuration of arp_announce option
net.ipv4.conf.all.arp_announce = 2

# When making an ARP request sent through eth0, always use an address that
# is configured on eth0 as the source address of the ARP request. If this
# is not set, and packets are being sent out eth0 for an address that is on
# lo, and an ARP request is required, then the address on lo will be used.
# As the source IP address of ARP requests is entered into the ARP cache on
# the destination, it has the effect of announcing this address. This is
# not desirable in this case as addresses on lo on the real-servers should
# be announced only by the linux-director.
net.ipv4.conf.eth0.arp_announce = 2

# Ditto for eth1, add for all ARPing interfaces
#net.ipv4.conf.eth1.arp_announce = 2

sysctl -p
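To confirm that the settings are now active, you can query them back (a quick check, not part of the original guide):

sysctl net.ipv4.conf.all.arp_ignore net.ipv4.conf.eth0.arp_ignore
sysctl net.ipv4.conf.all.arp_announce net.ipv4.conf.eth0.arp_announce

Each arp_ignore value should be 1 and each arp_announce value should be 2, matching what was set above.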

Add this section for the virtual IP address to /etc/network/interfaces:

sql1.example.com / sql2.example.com:

vi /etc/network/interfaces

auto lo:0
iface lo:0 inet static
  address 192.168.0.105
  netmask 255.255.255.255
  pre-up sysctl -p > /dev/null

ifup lo:0
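On each node, something like the following should confirm that the virtual IP is configured on the loopback interface and that MySQL answers on it (a sketch, not part of the original guide; it assumes mysqld is not bound to a single address, and it uses the ldirector credentials from section 6.4):

ip addr show lo
mysql -h 192.168.0.105 -u ldirector -pldirectorpassword ldirectordb -e "SELECT * FROM connectioncheck;"

The first command should list 192.168.0.105/32 on lo (label lo:0); the second should return the row inserted earlier.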


Comments

From: Anonymous at: 2006-03-27 18:10:41

This is rather unfortunate, but without foreign key support, and with memory-only storage, MySQL Cluster is not really a viable solution for most RDBMS users.

From: Anonymous at: 2006-03-28 22:06:44

The InnoDB engine supports foreign keys and works with the MySQL cluster, so your comment is incorrect, sir.

From: Anonymous at: 2006-03-31 07:26:23

Actually you are wrong, mate. While InnoDB in MySQL supports foreign keys, you cannot use InnoDB while setting up the MySQL Cluster support described in this article. You can only use the NDB backend, which is a simplified version of MyISAM. It also has the limitation of being completely memory resident.

From: Anonymous at: 2006-05-28 17:22:00

NDB Cluster has *nothing* to do with MyISAM. NDB has a long history outside of MySQL, and it has no relationship whatsoever to MyISAM.

From: at: 2007-07-06 20:06:50

We had MySQL Cluster running as the backend for a cluster of webapps and it had numerous problems, many of them just from lack of needed features. I opened a lot of enhancement requests to MySQL about these. Some, like the fact that the 'mysql' system tables are not centralized but kept separately on each node, would nearly drive you insane; we had to try synchronizing the user tables between all the client nodes. Another issue is that when a user installs a webapp, if one of the client nodes happened to be down for maintenance at the time, that node would never learn of the new database the user just set up, so if the load balancer directed him to that client node later on, everything would error. There are just many architectural issues with MySQL Cluster that were never very well thought through. It has a long way to go to being enterprise-ready. And performance was abysmal to boot.

 

From: Anonymous at: 2006-03-28 23:51:43

Memory only storage is a significant limitation. I hope this is fixed in a future version.

From: Anonymous at: 2006-04-23 15:32:17

Well, not anymore:

In MySQL 5.1, the memory-only requirement of MySQL Cluster is removed and operational data may now be accessed both on disk and in memory. A DBA can specify that table data can reside on disk, in memory, or a combination of main memory and disk (although a single table can only be assigned to either disk or main memory). Disk-based support includes new storage structures, tablespaces, that are used to logically house table data on disk. In addition, new memory caches are in place to manage the transfer of data stored in tablespaces to memory for fast access to repeatedly referenced information.
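For illustration, disk-based NDB tables in MySQL 5.1 are created roughly like this (a minimal sketch; the logfile group, tablespace, file names and sizes are made-up examples, and indexed columns still stay in memory):

CREATE LOGFILE GROUP lg_1
  ADD UNDOFILE 'undo_1.log'
  INITIAL_SIZE 16M
  ENGINE NDBCLUSTER;

CREATE TABLESPACE ts_1
  ADD DATAFILE 'data_1.dat'
  USE LOGFILE GROUP lg_1
  INITIAL_SIZE 32M
  ENGINE NDBCLUSTER;

CREATE TABLE disk_example (
  id INT NOT NULL PRIMARY KEY,
  payload VARCHAR(255)
) TABLESPACE ts_1 STORAGE DISK ENGINE=NDBCLUSTER;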

From: Anonymous at: 2006-04-23 16:13:16

"Memory only storage is a significant limitation. I hope this is fixed in a future version." This isn't a limitation to be fixed, but the fundamental tradeoff in MySQL Cluster Server's design: by accepting the limitation of being memory-based instead of disk based, it can be several orders of magnitude faster. If your data can't fit in RAM, and you don't need the performance, you should use one of the disk-based table types.

From: Anonymous at: 2006-03-29 18:44:18

The cluster management software seems to be a single point of failure; that is, if the load balancer running this software goes down, doesn't the cluster either go down or end up with inconsistent data ("split brain", as referenced in the article)?

I'm very new to clustering, so I'd be happy to learn why I'm wrong!

From: Anonymous at: 2006-05-05 12:24:30

The storage and MySQL Server nodes are not dependent on the management server for their execution. Its purpose is only to manage the cluster. It may fail and be restarted any number of times without affecting the running MySQL Cluster.

From: Anonymous at: 2006-04-23 15:33:59

You use Debian and install packages from source. Don't store the files in /usr/bin. Use /usr/local or /opt.

Custom packages installed under /usr can be broken by the Debian packaging system.

From: Anonymous at: 2006-08-27 00:30:27

Actually, if you use checkinstall instead of 'make install', it'll add the package to your apt setup so the files won't be overwritten.
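For reference, checkinstall is run from the unpacked source directory in place of 'make install'; it builds and registers a .deb from whatever the install step would have copied. A sketch (the path is a placeholder):

apt-get install checkinstall
cd /path/to/unpacked/source    # the directory where you would normally run 'make install'
checkinstall                   # runs 'make install' and wraps the result in a registered .deb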

From: at: 2007-05-31 14:28:28

You can make each balancer a management server and eliminate a single point of failure.

Install the management server on both load balancers and add both to config.ini:

# Management Server 1
[NDB_MGMD]
HostName=192.168.0.8                      # the IP of the first management server
ID=1
DataDir=/var/lib/mysql-cluster

# Management Server 2
[NDB_MGMD]
HostName=192.168.0.9                      # the IP of the second management server
ID=2
DataDir=/var/lib/mysql-cluster

Then, on each data node, modify my.cnf:

[mysqld]
ndbcluster
ndb-connectstring = "host=192.168.0.8,host=192.168.0.9"

[ndb_mgm]
connect-string = "host=192.168.0.8,host=192.168.0.9"

[ndbd]
connect-string = "host=192.168.0.8,host=192.168.0.9"

Make sure to run ndbd --initial.
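If you set this up, a quick way to confirm that both management servers are reachable is to point ndb_mgm at both of them (a sketch using the IPs from this comment):

ndb_mgm -c "192.168.0.8,192.168.0.9" -e show

The output should list both [ndb_mgmd(MGM)] nodes along with the connected ndbd and mysqld nodes.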

From: at: 2008-12-22 12:39:24

I wanted to know whether or not there will be a significant change in performance if we run Apache with load balancing enabled in front of the MySQL cluster.

Has anyone tried it before?

Anjin

From: Elumalai Ranganathan at: 2009-05-26 07:51:23

Thanks a lot! This document helped a lot in configuring the MySQL cluster. I have a query: I am going to configure a web server on the nodes using Tomcat. Is it possible to use the MySQL virtual IP for the Tomcat configuration?

From: Anonymous at: 2010-12-06 09:40:32

http://www.dancryer.com/2010/01/mysql-circular-replication

This is part 1 of a three-post series:
 - MySQL Load-Balanced Cluster Guide – Part 1 - setting up the servers themselves and configuring MySQL replication.

 - MySQL Load-Balanced Cluster Guide – Part 2 - set up a script to monitor the status of your MySQL cluster nodes, which we’ll use in the next guide to set up our proxy.

 - MySQL Load-Balanced Cluster Guide – Part 3 - setting up the load balancer with HAProxy, using the monitoring scripts

From: Altaf Hussain at: 2013-11-04 10:06:41

Very elaborate tutorial, I must say. A noob coming here can do a lot after reading the tutorial!!

From: Jack Chen at: 2012-08-27 14:04:32

Thanks for the detailed example. I would like to share one problem I had when following this document.

When I first set it up on CentOS, I only saw two ndbd nodes connected to the management node; there was no mysqld node connected.

It took me quite some time to figure out the problem: the ndbd processes on the ndb nodes listen on two or three random ports, and the ndb_mgmd process on the management node needs to connect to those ports. After I stopped iptables on the ndb nodes (iptables on the management node was already configured to allow incoming connections on port 1186), the cluster started.

MySQL's documentation for the ndbd configuration seems very poor; I couldn't find how to make ndbd use a fixed port, so I had to shut down iptables on the ndb nodes.

Maybe it's because I am using an old version, MySQL 5.0.21? It seems newer MySQL versions don't have the "max" version any more.
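For anyone hitting the same firewall issue: MySQL Cluster has a ServerPort parameter in the data node sections of config.ini that pins the transporter port, so an iptables rule can be written for it. Whether it is usable in 5.0.21 I cannot say, so treat this as a hint to check the documentation rather than a tested recipe (the host and port below are arbitrary examples):

# config.ini, one entry per data node
[NDBD]
HostName=192.168.0.101
ServerPort=50501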

From: Pankaj at: 2014-03-11 19:29:46

Really appreciate this guide. The person who has written it made it very simple to set up a MySQL cluster. I am very thankful to that person.