How to install Percona XtraDB Cluster for MySQL on Debian 8

In this article, I will show you how to configure a MySQL database cluster with three nodes in a multi-master replication setup. Multi-master replication allows writes on every node, so if one node fails, we can keep working on the others as if nothing happened.

The official Percona documentation can be found on the Percona website: https://www.percona.com/

First of all, why do we choose three nodes and not just two? In any cluster, the number of nodes should be odd, so that if a node gets disconnected, we can assume that the larger group of servers has the freshest data, which should be replicated to the disconnected node to avoid data loss. This mechanism only resolves conflicts in data replication; we won't lose data that was written only to the disconnected node.

This avoids a situation called split brain, in which we cannot automatically choose which node has the correct data. Think, for example, of a two-node cluster where the nodes lose their connection to each other and the same record is written on both: who wins when they come back online? We don't know, so a split brain occurs, and we have to decide manually which record is the right one.

The number of nodes needed to determine which part of the cluster has the right data is called the quorum; in our case, the quorum is 2, so we need at least two servers that are always connected to each other. If all three nodes go down, we have a split brain, and we must decide manually which server should be started in bootstrap mode; bootstrapping is the procedure that designates the main server from which the cluster resumes after the split brain.
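
As a rough sketch of how to check this in practice (optional, not part of the procedure below; the Galera state file path assumes the default data directory /var/lib/mysql), you can ask any node whether it still belongs to the quorate Primary component and, if the whole cluster went down, compare the replicated state on each server before bootstrapping the most up-to-date one.

# Ask a node whether it belongs to the Primary (quorate) component
mysql -u root -p -e "SHOW STATUS LIKE 'wsrep_cluster_status';"

# If all nodes are down, compare the Galera state on each server:
# the node with the highest seqno holds the most recent data
cat /var/lib/mysql/grastate.dat

# Bootstrap the cluster again from that node only (same command used in Step 2)
/etc/init.d/mysql bootstrap-pxc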

Configuring Percona XtraDB Cluster on Debian 8

This tutorial describes how to install and configure three Percona XtraDB Cluster nodes on Debian 8 servers. We will be using the packages from the Percona repositories.

  • Node 1
    • Hostname: mysql1.local.vm
    • IP address: 192.168.152.100
  • Node 2
    • Hostname: mysql2.local.vm
    • IP address: 192.168.152.110
  • Node 3
    • Hostname: mysql3.local.vm
    • IP address: 192.168.152.120

On each host, modify the /etc/hosts file as follows to ensure that hostname resolution works correctly.

127.0.0.1 localhost
192.168.152.100 mysql1.local.vm mysql1
192.168.152.110 mysql2.local.vm mysql2
192.168.152.120 mysql3.local.vm mysql3

# The following lines are desirable for IPv6 capable hosts
::1 localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
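
As a quick, optional check (not part of the original guide), you can verify on each host that the names resolve to the addresses above and that the nodes can reach each other:

getent hosts mysql1.local.vm mysql2.local.vm mysql3.local.vm
ping -c 1 mysql2.local.vm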

Prerequisites

The procedure described in this tutorial requires the following minimal server setup: three hosts (as listed above) running a minimal installation of Debian 8.

Step 1. Installing Percona XtraDB Cluster

On all nodes, execute the following commands as root:

wget https://repo.percona.com/apt/percona-release_0.1-4.$(lsb_release -sc)_all.deb
dpkg -i percona-release_0.1-4.$(lsb_release -sc)_all.deb
apt-get update
apt-get -y install percona-xtradb-cluster-57

When prompted, enter the MySQL root password you would like to use.

After the packages have been installed, mysqld will start automatically. Stop mysqld on all three nodes with the command shown below.
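
/etc/init.d/mysql stop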

Step 2. Configuring the first node

Individual nodes must be configured so that they can bootstrap the cluster. For more information about bootstrapping the cluster, see the Percona documentation.

  1. Add the following lines to the configuration file /etc/mysql/my.cnf on the first node (mysql1.local.vm), at the end of the [mysqld] section:

    [mysqld]
    ...

    # Path to Galera library
    wsrep_provider=/usr/lib/libgalera_smm.so

    # Cluster connection URL contains the IPs of node#1, node#2 and node#3
    wsrep_cluster_address=gcomm://192.168.152.100,192.168.152.110,192.168.152.120

    # In order for Galera to work correctly binlog format should be ROW
    binlog_format=ROW

    # MyISAM storage engine has only experimental support
    default_storage_engine=InnoDB

    # This InnoDB autoincrement locking mode is a requirement for Galera
    innodb_autoinc_lock_mode=2

    # Node #1 address
    wsrep_node_address=192.168.152.100

    # SST method
    wsrep_sst_method=xtrabackup-v2

    # Cluster name
    wsrep_cluster_name=my_ubuntu_cluster

    # Authentication for SST method
    wsrep_sst_auth="sstuser:PASSW0RD"

    Pay attention to the password you set up there; in my case it is "PASSW0RD".

  2. Start the first node with the following command:

    root@mysql1:~# /etc/init.d/mysql bootstrap-pxc
    

    This command will start the first node and bootstrap the cluster. You will see something like this if all is OK:

    root@mysql1:~# /etc/init.d/mysql bootstrap-pxc
    [ ok ] Bootstrapping Percona XtraDB Cluster database server: mysqld ..
    root@mysql1:~#
  3. After the first node has been started, connect to MySQL with the classic mysql -p command; the cluster status can then be checked by executing the query show status like 'wsrep%'; as in the example below:

    mysql> show status like 'wsrep%';
    +----------------------------+--------------------------------------+
    | Variable_name              | Value                                |
    +----------------------------+--------------------------------------+
    | wsrep_local_state_uuid     | 0251a27c-8a19-11e6-905b-f3f13b0ddc5b |
    ...
    | wsrep_local_state          | 4                                    |
    | wsrep_local_state_comment  | Synced                               |
    ...
    | wsrep_cluster_size         | 1                                    |
    | wsrep_cluster_status       | Primary                              |
    | wsrep_connected            | ON                                   |
    ...
    | wsrep_ready                | ON                                   |
    +----------------------------+--------------------------------------+
    59 rows in set (0.00 sec)
    

This output shows that the cluster has been successfully bootstrapped.

To perform State Snapshot Transfer using XtraBackup, set up a new user with proper privileges:

mysql> CREATE USER 'sstuser'@'localhost' IDENTIFIED BY 'PASSW0RD';
mysql> GRANT PROCESS, RELOAD, LOCK TABLES, REPLICATION CLIENT ON *.* TO 'sstuser'@'localhost';
mysql> FLUSH PRIVILEGES;

Note

The MySQL root account can also be used for performing SST, but it is more secure to use a different (non-root) user for this.
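
As an optional sanity check (not part of the original procedure), you can verify that the user referenced by wsrep_sst_auth exists and has the expected privileges:

mysql -u root -p -e "SHOW GRANTS FOR 'sstuser'@'localhost';"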

Step 3. Configuring the second node

  1. Append the following lines to the configuration file /etc/mysql/my.cnf on the second node (mysql2.local.vm), at the end of the [mysqld] section:

    [mysqld]
    ...

    # Path to Galera library
    wsrep_provider=/usr/lib/libgalera_smm.so

    # Cluster connection URL contains the IPs of node#1, node#2 and node#3
    wsrep_cluster_address=gcomm://192.168.152.100,192.168.152.110,192.168.152.120

    # In order for Galera to work correctly binlog format should be ROW
    binlog_format=ROW

    # MyISAM storage engine has only experimental support
    default_storage_engine=InnoDB

    # This InnoDB autoincrement locking mode is a requirement for Galera
    innodb_autoinc_lock_mode=2

    # Node #2 address
    wsrep_node_address=192.168.152.110

    # SST method
    wsrep_sst_method=xtrabackup-v2

    # Cluster name
    wsrep_cluster_name=my_ubuntu_cluster

    # Authentication for SST method
    wsrep_sst_auth="sstuser:PASSW0RD"
  2. Start the second node with the following command (note that this time, as you can see, it is not started in bootstrap mode):

    root@mysql2:~# /etc/init.d/mysql start
    
  3. After the server has been started, it should receive SST automatically. Cluster status can now be checked on both nodes. The following is an example of the status from the second node (mysql2.local.vm):

    mysql> show status like 'wsrep%';
    +----------------------------+--------------------------------------+
    | Variable_name              | Value                                |
    +----------------------------+--------------------------------------+
    | wsrep_local_state_uuid     | 0251a27c-8a19-11e6-905b-f3f13b0ddc5b |
    ...
    | wsrep_local_state          | 4                                    |
    | wsrep_local_state_comment  | Synced                               |
    ...
    | wsrep_cluster_size         | 2                                    |
    | wsrep_cluster_status       | Primary                              |
    | wsrep_connected            | ON                                   |
    ...
    | wsrep_ready                | ON                                   |
    +----------------------------+--------------------------------------+
    40 rows in set (0.01 sec)
    

    This output shows that the new node has been successfully added to the cluster. Note that the variable wsrep_cluster_size has become 2, instead of the 1 we saw in the first query.

Step 4. Configuring the third node

  1. Append the following lines to the configuration file /etc/mysql/my.cnf on the third node (mysql3.local.vm), at the end of the [mysqld] section:

    [mysqld]
    ...

    # Path to Galera library
    wsrep_provider=/usr/lib/libgalera_smm.so

    # Cluster connection URL contains the IPs of node#1, node#2 and node#3
    wsrep_cluster_address=gcomm://192.168.152.100,192.168.152.110,192.168.152.120

    # In order for Galera to work correctly binlog format should be ROW
    binlog_format=ROW

    # MyISAM storage engine has only experimental support
    default_storage_engine=InnoDB

    # This InnoDB autoincrement locking mode is a requirement for Galera
    innodb_autoinc_lock_mode=2

    # Node #3 address
    wsrep_node_address=192.168.152.120

    # SST method
    wsrep_sst_method=xtrabackup-v2

    # Cluster name
    wsrep_cluster_name=my_ubuntu_cluster

    # Authentication for SST method
    wsrep_sst_auth="sstuser:PASSW0RD"
  2. Start the third node with the following command:

    root@mysql3:~# /etc/init.d/mysql start
    
  3. After the server has been started, it should receive SST automatically. Cluster status can be checked on all nodes. The following is an example of the status from the third node (mysql3.local.vm):

    mysql> show status like 'wsrep%';
    +----------------------------+--------------------------------------+
    | Variable_name              | Value                                |
    +----------------------------+--------------------------------------+
    | wsrep_local_state_uuid     | 0251a27c-8a19-11e6-905b-f3f13b0ddc5b |
    ...
    | wsrep_local_state          | 4                                    |
    | wsrep_local_state_comment  | Synced                               |
    ...
    | wsrep_cluster_size         | 3                                    |
    | wsrep_cluster_status       | Primary                              |
    | wsrep_connected            | ON                                   |
    ...
    | wsrep_ready                | ON                                   |
    +----------------------------+--------------------------------------+
    40 rows in set (0.01 sec)
    

    This output confirms that the third node has joined the cluster. Again, take a look at wsrep_cluster_size, which has now become 3, instead of 2.

If you encounter any problems, take a look at /var/log/syslog to check whether everything is OK:

Oct 4 12:16:13 mysql3 mysql[2767]: Starting MySQL (Percona XtraDB Cluster) database server: mysqld . . .State transfer in progress, setting sleep higher: mysqld . ..
Oct 4 12:16:13 mysql3 systemd[1]: Started LSB: Start and stop the mysql (Percona XtraDB Cluster) daemon.
Oct 4 12:17:01 mysql3 CRON[3731]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)

In this example, all went well, and you can see the state transfer in progress, meaning that data is being transferred to the joining node.
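
A minimal sketch of how you could watch a node while it joins (assuming the default syslog location shown above): follow the log on the joining node, and on a node that is already Synced poll the cluster size, which should grow once the transfer completes.

# On the joining node: follow the system log during the state transfer
tail -f /var/log/syslog

# On a node that is already Synced: watch the cluster grow once the join completes
mysql -u root -p -e "SHOW STATUS LIKE 'wsrep_cluster_size';"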

Testing replication

To test the replication, let's create a new database on the second node, create a table for that database on the third node, and add some records to the table on the first node.

  1. Create a new database on the second node:

    mysql@mysql2> CREATE DATABASE percona;
    Query OK, 1 row affected (0.01 sec)
    
  2. Create a table on the third node:

    mysql@mysql3> USE percona;
    Database changed
    
    mysql@mysql3> CREATE TABLE example (node_id INT PRIMARY KEY, node_name VARCHAR(30));
    Query OK, 0 rows affected (0.05 sec)
    
  3. Insert records on the first node:

    mysql@mysql1> INSERT INTO percona.example VALUES (1, 'percona1');
    Query OK, 1 row affected (0.02 sec)
    
  4. Retrieve all the rows from that table on the second node:

    mysql@mysql2> SELECT * FROM percona.example;
    +---------+-----------+
    | node_id | node_name |
    +---------+-----------+
    |       1 | percona1  |
    +---------+-----------+
    1 row in set (0.00 sec)
    

To ensure that your application can always reach the cluster, you can add a load balancer in front of the three nodes.
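
As an illustration only (the load balancer is not covered by this tutorial; HAProxy, the bind address, and the health-check options here are my own assumptions), a minimal TCP balancing fragment for the three nodes could look like this:

# /etc/haproxy/haproxy.cfg (fragment) - balance MySQL traffic across the three nodes
# Uses plain TCP health checks; a real setup would add MySQL-aware checks
listen mysql-cluster
    bind 0.0.0.0:3306
    mode tcp
    balance roundrobin
    server mysql1 192.168.152.100:3306 check
    server mysql2 192.168.152.110:3306 check
    server mysql3 192.168.152.120:3306 check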
