How to Set Up a Three-Node MySQL 8 Cluster on Debian 10

MySQL is a free, open-source relational database management system. It is used to store data for a wide range of purposes, including data warehousing, e-commerce, and logging applications. MySQL Cluster is a technology that provides scalability and high availability at low cost.

In this tutorial, we will use one management (master) node to store the cluster's configuration and two data nodes to store the cluster data. The IP address of each node is shown below:

  • Master or Manager Node - 104.245.33.61
  • Data Node1 - 104.245.32.195
  • Data Node2 - 69.87.218.169
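If you prefer working with hostnames instead of raw IP addresses, you can optionally map the nodes in /etc/hosts on each server. This is a sketch only; the hostnames master, data1, and data2 are illustrative assumptions, while the IPs are the ones listed above:

```shell
# Optional: map the cluster nodes to hostnames on each server.
# The hostnames master/data1/data2 are illustrative assumptions.
cat >> /etc/hosts <<'EOF'
104.245.33.61   master
104.245.32.195  data1
69.87.218.169   data2
EOF
```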

Prerequisites

  • Three servers running Debian 10: one for the manager node and two for the data nodes.
  • A root password is set up on each server.

Getting Started

First, you will need to update the package index and upgrade the installed packages on all three servers. You can do so with the following commands:

apt-get update -y
apt-get upgrade -y

Once all the servers are updated, you can proceed to the next step.
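NDB Cluster nodes must be able to reach each other freely: the management server listens on TCP port 1186, and data nodes use additional, dynamically assigned ports. If you run a firewall, a simple approach is to allow all traffic between the three node IPs. A minimal sketch using ufw follows (assuming ufw is installed; adjust to your firewall of choice):

```shell
# Allow all traffic between the cluster nodes (run on each server).
# NDB data nodes use dynamically assigned ports in addition to 1186,
# so per-port rules are impractical; trusting the node IPs is simpler.
ufw allow from 104.245.33.61
ufw allow from 104.245.32.195
ufw allow from 69.87.218.169
ufw enable
```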

Install and Setup MySQL Cluster Manager

First, you will need to install the MySQL cluster management package on the Master node. This package is not included in the default Debian 10 repository, so you will need to download it from the official MySQL website.

You can download it with the following command:

wget https://cdn.mysql.com//Downloads/MySQL-Cluster-8.0/mysql-cluster-community-management-server_8.0.24-1debian10_amd64.deb

After downloading the deb file, install it using the following command:

dpkg -i mysql-cluster-community-management-server_8.0.24-1debian10_amd64.deb

Once the installation is complete, create a directory to store cluster configuration with the following command:

mkdir /var/lib/mysql-cluster

Next, create a new configuration file with the following command:

nano /var/lib/mysql-cluster/config.ini

Add the following lines:

[ndbd default]
NoOfReplicas=2  # Number of replicas

[ndb_mgmd]
# Management process options:
hostname=104.245.33.61 #IP of the MySQL Cluster Manager
datadir=/var/lib/mysql-cluster
[ndbd]
hostname=104.245.32.195 #IP of the first data node
NodeId=2            # Node ID for this data node
datadir=/usr/local/mysql/data   # Remote directory for the data files
[ndbd]
hostname=69.87.218.169 #IP of the second data node
NodeId=3            # Node ID for this data node
datadir=/usr/local/mysql/data   # Remote directory for the data files
[mysqld]
# SQL node options:
hostname=104.245.33.61 #IP of the MySQL Cluster Manager

Save and close the file when you are finished then start the cluster manager with the following command:

ndb_mgmd -f /var/lib/mysql-cluster/config.ini

You should get the following output:

MySQL Cluster Management Server mysql-8.0.24 ndb-8.0.24
2021-05-10 08:23:05 [MgmtSrvr] INFO     -- The default config directory '/usr/mysql-cluster' does not exist. Trying to create it...
2021-05-10 08:23:05 [MgmtSrvr] INFO     -- Sucessfully created config directory

Now, kill the running server with the following command:

pkill -f ndb_mgmd

Next, create a systemd service file for MySQL Cluster Manager to manage the service:

nano /etc/systemd/system/ndb_mgmd.service

Add the following lines:

[Unit]
Description=MySQL NDB Cluster Management Server
After=network.target auditd.service
[Service]
Type=forking
ExecStart=/usr/sbin/ndb_mgmd -f /var/lib/mysql-cluster/config.ini
ExecReload=/bin/kill -HUP $MAINPID
KillMode=process
Restart=on-failure
[Install]
WantedBy=multi-user.target

Save and close the file then reload the systemd daemon with the following command:

systemctl daemon-reload

Next, start the Cluster Manager and enable it to start at system reboot with the following commands:

systemctl start ndb_mgmd
systemctl enable ndb_mgmd

You can now check the status of the service with the following command:

systemctl status ndb_mgmd

You should get the following output:

● ndb_mgmd.service - MySQL NDB Cluster Management Server
   Loaded: loaded (/etc/systemd/system/ndb_mgmd.service; disabled; vendor preset: enabled)
   Active: active (running) since Mon 2021-05-10 08:23:53 UTC; 6s ago
  Process: 984 ExecStart=/usr/sbin/ndb_mgmd -f /var/lib/mysql-cluster/config.ini (code=exited, status=0/SUCCESS)
 Main PID: 985 (ndb_mgmd)
    Tasks: 12 (limit: 2359)
   Memory: 6.5M
   CGroup: /system.slice/ndb_mgmd.service
           └─985 /usr/sbin/ndb_mgmd -f /var/lib/mysql-cluster/config.ini

May 10 08:23:53 master systemd[1]: Starting MySQL NDB Cluster Management Server...
May 10 08:23:53 master ndb_mgmd[984]: MySQL Cluster Management Server mysql-8.0.24 ndb-8.0.24
May 10 08:23:53 master systemd[1]: Started MySQL NDB Cluster Management Server.
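At this point only the management node is running, so a quick status query should show the two data nodes and the SQL node as not yet connected. You can check non-interactively with:

```shell
# Query the management server; the data and SQL nodes will appear as
# "not connected" until they are configured in the next steps.
ndb_mgm -e show
```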

Once you are finished, you can proceed to the next step.

Install and Configure Data Nodes

Next, we will install the data node package on the other two servers and configure them to communicate with the MySQL Cluster Manager.

First, install the required dependencies on both data nodes with the following command:

apt-get install libclass-methodmaker-perl -y

Once all the dependencies are installed, download the MySQL Cluster data node deb package with the following command:

wget https://cdn.mysql.com//Downloads/MySQL-Cluster-8.0/mysql-cluster-community-data-node_8.0.24-1debian10_amd64.deb

Once the download is completed, install the downloaded file with the following command:

dpkg -i mysql-cluster-community-data-node_8.0.24-1debian10_amd64.deb

By default, data nodes read their configuration from the /etc/my.cnf file.

So create a new /etc/my.cnf file on both data nodes with the following command:

nano /etc/my.cnf

Add the cluster IP as shown below:

[mysql_cluster]
# Options for NDB Cluster processes:
ndb-connectstring=104.245.33.61 #IP of the MySQL Cluster Manager

Save and close the file then create a data directory on both data nodes:

mkdir -p /usr/local/mysql/data
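Before starting the data node daemon, it is worth confirming that the management server's port 1186 is reachable from each data node. This sketch assumes netcat is installed; a "succeeded" or "open" result means connectivity is fine:

```shell
# Verify that this data node can reach the cluster manager on TCP 1186.
nc -zv 104.245.33.61 1186
```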

Now, start the data nodes with the following command:

ndbd

You should get the following output:

2021-05-10 08:27:13 [ndbd] INFO     -- Angel connected to '104.245.33.61:1186'
2021-05-10 08:27:13 [ndbd] INFO     -- Angel allocated nodeid: 2

Next, kill the running ndbd process with the following command:

pkill -f ndbd

Next, create a systemd service file for ndbd on both data nodes with the following command:

nano /etc/systemd/system/ndbd.service

Add the following lines:

[Unit]
Description=MySQL NDB Data Node Daemon
After=network.target auditd.service
[Service]
Type=forking
ExecStart=/usr/sbin/ndbd
ExecReload=/bin/kill -HUP $MAINPID
KillMode=process
Restart=on-failure
[Install]
WantedBy=multi-user.target

Save and close the file then reload the systemd daemon with the following command:

systemctl daemon-reload

Next, start the ndbd service and enable it to start at system reboot with the following commands:

systemctl start ndbd
systemctl enable ndbd

You can also check the status of ndbd service using the following command:

systemctl status ndbd

You should get the following output:

● ndbd.service - MySQL NDB Data Node Daemon
   Loaded: loaded (/etc/systemd/system/ndbd.service; disabled; vendor preset: enabled)
   Active: active (running) since Mon 2021-05-10 08:28:28 UTC; 12s ago
  Process: 740 ExecStart=/usr/sbin/ndbd (code=exited, status=0/SUCCESS)
 Main PID: 741 (ndbd)
    Tasks: 46 (limit: 2359)
   Memory: 827.1M
   CGroup: /system.slice/ndbd.service
           ├─741 /usr/sbin/ndbd
           └─742 /usr/sbin/ndbd

May 10 08:28:28 data1 systemd[1]: Starting MySQL NDB Data Node Daemon...
May 10 08:28:28 data1 ndbd[740]: 2021-05-10 08:28:28 [ndbd] INFO     -- Angel connected to '104.245.33.61:1186'
May 10 08:28:28 data1 ndbd[740]: 2021-05-10 08:28:28 [ndbd] INFO     -- Angel allocated nodeid: 2
May 10 08:28:28 data1 systemd[1]: Started MySQL NDB Data Node Daemon.

Install and Configure MySQL Server

Next, we will download and install the MySQL Server and Client packages on the MySQL Cluster Manager Node.

First, download the MySQL Cluster deb bundle using the following command:

wget https://cdn.mysql.com/Downloads/MySQL-Cluster-8.0/mysql-cluster_8.0.24-1debian10_amd64.deb-bundle.tar

Next, extract the downloaded file inside the /opt directory with the following command:

tar -xvf mysql-cluster_8.0.24-1debian10_amd64.deb-bundle.tar -C /opt

Next, change the directory to /opt and install the required dependencies with the following command:

cd /opt
apt-get install libaio1 libmecab2 libnuma1 psmisc -y

Next, install the MySQL Common deb file with the following command:

dpkg -i mysql-common*

Next, install the MySQL client and cluster server packages with the following commands:

dpkg -i mysql-cluster-community-client_8.0.24-1debian10_amd64.deb mysql-cluster-community-client-core_8.0.24-1debian10_amd64.deb mysql-cluster-community-client-plugins_8.0.24-1debian10_amd64.deb
dpkg -i mysql-client_8.0.24-1debian10_amd64.deb
dpkg -i mysql-cluster-community-server*

If you get any dependency error, run the following command:

apt-get install -f

Next, install the MySQL Server package with the following command:

dpkg -i mysql-server_8.0.24-1debian10_amd64.deb

During the installation, you will be prompted to set a MySQL root password.

After installing the MySQL server, you will need to edit the MySQL main configuration file and define the cluster IP:

nano /etc/mysql/my.cnf

Add the following lines:

[mysqld]
# Options for mysqld process:
ndbcluster                      # run NDB storage engine
[mysql_cluster]
# Options for NDB Cluster processes:
ndb-connectstring=104.245.33.61  #IP of the MySQL Cluster Manager

Save and close the file when you are finished. Next, restart the MySQL service and enable it to start at system reboot:

systemctl restart mysql
systemctl enable mysql

You can check the MySQL server status with the following command:

systemctl status mysql

You should see the following output:

● mysql.service - MySQL Cluster Community Server
   Loaded: loaded (/lib/systemd/system/mysql.service; enabled; vendor preset: enabled)
   Active: active (running) since Mon 2021-05-10 08:35:04 UTC; 7s ago
     Docs: man:mysqld(8)
           http://dev.mysql.com/doc/refman/en/using-systemd.html
  Process: 1950 ExecStartPre=/usr/share/mysql-8.0/mysql-systemd-start pre (code=exited, status=0/SUCCESS)
 Main PID: 1985 (mysqld)
   Status: "Server is operational"
    Tasks: 47 (limit: 2359)
   Memory: 372.0M
   CGroup: /system.slice/mysql.service
           └─1985 /usr/sbin/mysqld

May 10 08:35:01 master systemd[1]: Starting MySQL Cluster Community Server...
May 10 08:35:04 master systemd[1]: Started MySQL Cluster Community Server.

Once you are finished, you can proceed to the next step.

Test MySQL Cluster

At this point, the three-node MySQL Cluster is installed and configured. Next, you will need to verify that it is working.

First, log in to the MySQL Cluster Manager node and connect to MySQL with the following command:

mysql -u root -p

Provide your MySQL root password then check the cluster information with the following command:

mysql> SHOW ENGINE NDB STATUS \G

If everything is fine, you should get the following output:

*************************** 1. row ***************************
  Type: ndbclus
  Name: connection
Status: cluster_node_id=4, connected_host=104.245.33.61, connected_port=1186, number_of_data_nodes=2, number_of_ready_data_nodes=2, connect_count=0
*************************** 2. row ***************************
  Type: ndbclus
  Name: NdbTransaction
Status: created=2, free=2, sizeof=392
*************************** 3. row ***************************
  Type: ndbclus
  Name: NdbOperation
Status: created=4, free=4, sizeof=944
*************************** 4. row ***************************
  Type: ndbclus
  Name: NdbIndexScanOperation
Status: created=0, free=0, sizeof=1152
*************************** 5. row ***************************
  Type: ndbclus
  Name: NdbIndexOperation
Status: created=0, free=0, sizeof=952
*************************** 6. row ***************************
  Type: ndbclus
  Name: NdbRecAttr
Status: created=0, free=0, sizeof=88
*************************** 7. row ***************************
  Type: ndbclus
  Name: NdbApiSignal
Status: created=16, free=16, sizeof=144
*************************** 8. row ***************************
  Type: ndbclus
  Name: NdbLabel
Status: created=0, free=0, sizeof=200
*************************** 9. row ***************************
  Type: ndbclus
  Name: NdbBranch
Status: created=0, free=0, sizeof=32
*************************** 10. row ***************************
  Type: ndbclus
  Name: NdbSubroutine
Status: created=0, free=0, sizeof=72
*************************** 11. row ***************************
  Type: ndbclus
  Name: NdbCall
Status: created=0, free=0, sizeof=24
*************************** 12. row ***************************
  Type: ndbclus
  Name: NdbBlob
Status: created=0, free=0, sizeof=592
*************************** 13. row ***************************
  Type: ndbclus
  Name: NdbReceiver
Status: created=0, free=0, sizeof=128
*************************** 14. row ***************************
  Type: ndbclus
  Name: NdbLockHandle
Status: created=0, free=0, sizeof=48
*************************** 15. row ***************************
  Type: ndbclus
  Name: binlog
Status: latest_epoch=897648164875, latest_trans_epoch=820338753551, latest_received_binlog_epoch=0, latest_handled_binlog_epoch=0, latest_applied_binlog_epoch=0
15 rows in set (0.00 sec)

Now, exit from the MySQL shell with the following command:

mysql> exit

Now, we will perform another test to confirm that the cluster is functioning properly.

Connect to the cluster management console with the following command:

ndb_mgm

You should see the following output:

-- NDB Cluster -- Management Client --
ndb_mgm>

Now, run the following command to check all data nodes:

ndb_mgm> SHOW

You should get the following output:

Connected to Management Server at: 104.245.33.61:1186
Cluster Configuration
---------------------
[ndbd(NDB)]	2 node(s)
id=2	@104.245.32.195  (mysql-8.0.24 ndb-8.0.24, Nodegroup: 0, *)
id=3	@69.87.218.169  (mysql-8.0.24 ndb-8.0.24, Nodegroup: 0)

[ndb_mgmd(MGM)]	1 node(s)
id=1	@104.245.33.61  (mysql-8.0.24 ndb-8.0.24)

[mysqld(API)]	1 node(s)
id=4	@104.245.33.61  (mysql-8.0.24 ndb-8.0.24)

To check the status of the first data node, run the following command:

ndb_mgm> 2 STATUS

You should see the following output:

Node 2: started (mysql-8.0.24 ndb-8.0.24)

To check the status of the second data node, run the following command:

ndb_mgm> 3 STATUS

You should see the following output:

Node 3: started (mysql-8.0.24 ndb-8.0.24)
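As a final check, you can create a test table using the NDBCLUSTER storage engine on the SQL node and confirm that reads and writes work. This is a sketch only; the database name clustertest and the table contents are arbitrary examples:

```shell
# Run on the manager/SQL node; you will be prompted for the root password.
# The database and table names are arbitrary examples.
mysql -u root -p <<'EOF'
CREATE DATABASE IF NOT EXISTS clustertest;
USE clustertest;
CREATE TABLE nodes (id INT PRIMARY KEY, name VARCHAR(32)) ENGINE=NDBCLUSTER;
INSERT INTO nodes VALUES (1, 'data1'), (2, 'data2');
SELECT * FROM nodes;
EOF
```

If the SELECT returns the inserted rows, the SQL node is successfully storing data in the NDB engine backed by both data nodes.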

Conclusion

Congratulations! You have successfully installed and set up a three-node MySQL Cluster on Debian 10 servers. You can now use this setup in a production environment to achieve scalability and high availability.
