Setting Up Master-Master Replication On Four Nodes With MySQL 5 On Debian Etch - Page 2
4 Setting Up Replication
Now we set up master-master replication in /etc/mysql/my.cnf. The crucial configuration options for master-master replication are auto_increment_increment and auto_increment_offset:
- auto_increment_increment controls the increment between successive AUTO_INCREMENT values.
- auto_increment_offset determines the starting point for AUTO_INCREMENT column values.
Let's assume we have N MySQL nodes (N=4 in this example), then auto_increment_increment has the value N on all nodes, and each node must have a different value for auto_increment_offset (1, 2, ..., N).
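To see how these two settings split up the key space, here is a small Python sketch (purely illustrative, not part of the setup) that computes the AUTO_INCREMENT values each of our four nodes would hand out:

```python
# Illustrative only: which AUTO_INCREMENT values each node generates
# with auto_increment_increment = N and auto_increment_offset = k.
N = 4  # number of MySQL nodes

def ids_for_node(offset, count=4):
    """First `count` AUTO_INCREMENT values generated on the node with this offset."""
    return [offset + i * N for i in range(count)]

for node in range(1, N + 1):
    print(f"server{node} (offset {node}): {ids_for_node(node)}")
```

server1 generates 1, 5, 9, 13, ..., server2 generates 2, 6, 10, 14, ..., and so on, so no two nodes can ever assign the same AUTO_INCREMENT value.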
We also need to enable log-slave-updates: without it, a node applies the updates it receives from its master but does not write them to its own binary log, so they are never passed on. Replication would then work only one hop, for example from server1 to server2, and the changes would never reach server3 and server4.
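The effect of log-slave-updates in a replication ring can be sketched with a simplified model (this is an illustration, not MySQL code): an event travels to the next node only if the previous node wrote it to its own binary log, which for replicated events happens only with log-slave-updates enabled:

```python
# Simplified model of a 4-node replication ring
# (server1 -> server2 -> server3 -> server4 -> server1).
# An event written on node 1 reaches a downstream node only if every node
# in between re-logs replicated events (log-slave-updates).

def nodes_reached(n_nodes, log_slave_updates):
    """Return the list of nodes (1-based) that receive an event written on node 1."""
    reached = [1]                 # node 1 logged the event itself
    for node in range(2, n_nodes + 1):
        reached.append(node)      # replicated from the predecessor's binlog
        if not log_slave_updates:
            break                 # without log-slave-updates it is not re-logged here
    return reached

print(nodes_reached(4, log_slave_updates=True))   # all four nodes see the event
print(nodes_reached(4, log_slave_updates=False))  # only server1 and server2
```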
Now let's configure our four MySQL nodes:
server1:
vi /etc/mysql/my.cnf
Search for the section that starts with [mysqld], and put the following options into it (commenting out all existing conflicting options):
[...]
[mysqld]
server-id = 1
replicate-same-server-id = 0
auto-increment-increment = 4
auto-increment-offset = 1
master-host = 192.168.0.103
master-user = slaveuser_for_s1
master-password = slave_user_for_server1_password
master-connect-retry = 60
replicate-do-db = exampledb
log-bin = /var/log/mysql/mysql-bin.log
binlog-do-db = exampledb
log-slave-updates
relay-log = /var/lib/mysql/slave-relay.log
relay-log-index = /var/lib/mysql/slave-relay-log.index
expire_logs_days = 10
max_binlog_size = 500M
[...]
Then restart MySQL:
/etc/init.d/mysql restart
Now do the same on server2...
server2:
vi /etc/mysql/my.cnf
[...]
[mysqld]
server-id = 2
replicate-same-server-id = 0
auto-increment-increment = 4
auto-increment-offset = 2
master-host = 192.168.0.100
master-user = slaveuser_for_s2
master-password = slave_user_for_server2_password
master-connect-retry = 60
replicate-do-db = exampledb
log-bin = /var/log/mysql/mysql-bin.log
binlog-do-db = exampledb
log-slave-updates
relay-log = /var/lib/mysql/slave-relay.log
relay-log-index = /var/lib/mysql/slave-relay-log.index
expire_logs_days = 10
max_binlog_size = 500M
[...]
/etc/init.d/mysql restart
...server3...
server3:
vi /etc/mysql/my.cnf
[...]
[mysqld]
server-id = 3
replicate-same-server-id = 0
auto-increment-increment = 4
auto-increment-offset = 3
master-host = 192.168.0.101
master-user = slaveuser_for_s3
master-password = slave_user_for_server3_password
master-connect-retry = 60
replicate-do-db = exampledb
log-bin = /var/log/mysql/mysql-bin.log
binlog-do-db = exampledb
log-slave-updates
relay-log = /var/lib/mysql/slave-relay.log
relay-log-index = /var/lib/mysql/slave-relay-log.index
expire_logs_days = 10
max_binlog_size = 500M
[...]
/etc/init.d/mysql restart
... and server4:
server4:
vi /etc/mysql/my.cnf
[...]
[mysqld]
server-id = 4
replicate-same-server-id = 0
auto-increment-increment = 4
auto-increment-offset = 4
master-host = 192.168.0.102
master-user = slaveuser_for_s4
master-password = slave_user_for_server4_password
master-connect-retry = 60
replicate-do-db = exampledb
log-bin = /var/log/mysql/mysql-bin.log
binlog-do-db = exampledb
log-slave-updates
relay-log = /var/lib/mysql/slave-relay.log
relay-log-index = /var/lib/mysql/slave-relay-log.index
expire_logs_days = 10
max_binlog_size = 500M
[...]
/etc/init.d/mysql restart
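Before moving on, it is worth double-checking that no two nodes share a server-id or an auto-increment-offset, and that auto-increment-increment equals the node count everywhere. A small sanity-check sketch (hypothetical helper, values taken from the four configs above):

```python
# Sanity check for the four node configs above: server-ids and offsets
# must be unique, and auto-increment-increment must equal the node count.
configs = [
    {"server_id": 1, "increment": 4, "offset": 1},
    {"server_id": 2, "increment": 4, "offset": 2},
    {"server_id": 3, "increment": 4, "offset": 3},
    {"server_id": 4, "increment": 4, "offset": 4},
]

def check_configs(configs):
    n = len(configs)
    assert len({c["server_id"] for c in configs}) == n, "duplicate server-id"
    assert len({c["offset"] for c in configs}) == n, "duplicate auto-increment-offset"
    assert all(c["increment"] == n for c in configs), "increment != node count"
    return True

print(check_configs(configs))  # True if the settings are consistent
```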
Before we continue, we must make sure that no slave processes are running on server1 to server4:
server1/server2/server3/server4:
/usr/bin/mysqladmin --user=root --password=yourrootsqlpassword stop-slave
Next we lock the exampledb database on server1, find out about the master status of server1, create an SQL dump of exampledb (that we will import into exampledb on server2, server3, and server4 so that all four databases contain the same data), and unlock the database so that it can be used again:
server1:
mysql -u root -p
On the MySQL shell, run the following commands:
server1:
USE exampledb;
FLUSH TABLES WITH READ LOCK;
SHOW MASTER STATUS;
The last command should show something like this (please write it down, we'll need it later on):
mysql> SHOW MASTER STATUS;
+------------------+----------+--------------+------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+------------------+----------+--------------+------------------+
| mysql-bin.000010 |       98 | exampledb    |                  |
+------------------+----------+--------------+------------------+
1 row in set (0.00 sec)
mysql>
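Since the File and Position values must be copied over to the other nodes by hand, a small (hypothetical) helper that pulls them out of the pasted table can reduce copy errors; this is just a convenience sketch, not part of the official setup:

```python
# Hypothetical helper: extract File and Position from the tabular
# output of SHOW MASTER STATUS, as pasted above.
def parse_master_status(output):
    rows = [line for line in output.splitlines() if line.startswith("|")]
    header = [c.strip() for c in rows[0].strip("|").split("|")]
    values = [c.strip() for c in rows[1].strip("|").split("|")]
    status = dict(zip(header, values))
    return status["File"], int(status["Position"])

sample = """\
+------------------+----------+--------------+------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+------------------+----------+--------------+------------------+
| mysql-bin.000010 |       98 | exampledb    |                  |
+------------------+----------+--------------+------------------+"""

print(parse_master_status(sample))  # ('mysql-bin.000010', 98)
```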
Do not leave the MySQL shell yet: leaving it would release the read lock, and the database must stay locked while we create the dump. So, with the MySQL shell still open, we open a second command line window where we create the SQL dump snapshot.sql and transfer it to server2, server3, and server4 (using scp):
server1:
cd /tmp
mysqldump -u root -pyourrootsqlpassword --opt exampledb > snapshot.sql
scp snapshot.sql [email protected]:/tmp
scp snapshot.sql [email protected]:/tmp
scp snapshot.sql [email protected]:/tmp
Afterwards, you can close the second command line window. On the first command line window, we can now unlock the database and leave the MySQL shell:
server1:
UNLOCK TABLES;
quit;
4.1 Setting Up Replication On server2
(This chapter is for server2 only!)
On server2, we can now import the SQL dump snapshot.sql like this:
server2:
/usr/bin/mysqladmin --user=root --password=yourrootsqlpassword stop-slave
cd /tmp
mysql -u root -pyourrootsqlpassword exampledb < snapshot.sql
Afterwards, we must find out about the master status of server2 as well and write it down:
mysql -u root -p
USE exampledb;
FLUSH TABLES WITH READ LOCK;
SHOW MASTER STATUS;
mysql> SHOW MASTER STATUS;
+------------------+----------+--------------+------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+------------------+----------+--------------+------------------+
| mysql-bin.000010 |     1067 | exampledb    |                  |
+------------------+----------+--------------+------------------+
1 row in set (0.00 sec)
mysql>
Then unlock the tables:
UNLOCK TABLES;
and run the following command to make server2 a slave of server1 (it is important that you replace MASTER_LOG_FILE and MASTER_LOG_POS with the values you got from the SHOW MASTER STATUS; command that we ran on server1!):
CHANGE MASTER TO MASTER_HOST='192.168.0.100', MASTER_USER='slaveuser_for_s2', MASTER_PASSWORD='slave_user_for_server2_password', MASTER_LOG_FILE='mysql-bin.000010', MASTER_LOG_POS=98;
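If you prefer not to assemble this statement by hand for each node, it can be templated; here is a hypothetical Python sketch (not part of the tutorial's toolchain) that builds the CHANGE MASTER TO statement from the values you noted down:

```python
# Hypothetical sketch: build the CHANGE MASTER TO statement from the
# values noted during SHOW MASTER STATUS on the master node.
def change_master_sql(host, user, password, log_file, log_pos):
    return (
        f"CHANGE MASTER TO MASTER_HOST='{host}', "
        f"MASTER_USER='{user}', MASTER_PASSWORD='{password}', "
        f"MASTER_LOG_FILE='{log_file}', MASTER_LOG_POS={log_pos};"
    )

print(change_master_sql("192.168.0.100", "slaveuser_for_s2",
                        "slave_user_for_server2_password",
                        "mysql-bin.000010", 98))
```

With the example values, this prints exactly the statement shown above.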
Finally start the slave:
START SLAVE;
Then check the slave status:
SHOW SLAVE STATUS \G
It is important that both Slave_IO_Running and Slave_SQL_Running have the value Yes in the output (otherwise something went wrong, and you should check your setup again and take a look at /var/log/syslog to find out about any errors):
mysql> SHOW SLAVE STATUS \G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 192.168.0.100
Master_User: slaveuser_for_s2
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: mysql-bin.000010
Read_Master_Log_Pos: 98
Relay_Log_File: slave-relay.000002
Relay_Log_Pos: 235
Relay_Master_Log_File: mysql-bin.000010
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Replicate_Do_DB: exampledb
Replicate_Ignore_DB:
Replicate_Do_Table:
Replicate_Ignore_Table:
Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
Last_Errno: 0
Last_Error:
Skip_Counter: 0
Exec_Master_Log_Pos: 98
Relay_Log_Space: 235
Until_Condition: None
Until_Log_File:
Until_Log_Pos: 0
Master_SSL_Allowed: No
Master_SSL_CA_File:
Master_SSL_CA_Path:
Master_SSL_Cert:
Master_SSL_Cipher:
Master_SSL_Key:
Seconds_Behind_Master: 0
1 row in set (0.01 sec)
mysql>
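The two Yes values are easy to overlook in that long listing; a small (hypothetical) helper that checks them in the pasted \G output looks like this — again just a convenience sketch, not something MySQL provides:

```python
# Hypothetical helper: confirm from SHOW SLAVE STATUS \G output that
# both replication threads (IO and SQL) are running.
def slave_threads_running(output):
    status = {}
    for line in output.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            status[key.strip()] = value.strip()
    return (status.get("Slave_IO_Running") == "Yes"
            and status.get("Slave_SQL_Running") == "Yes")

sample = """\
Slave_IO_State: Waiting for master to send event
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Last_Errno: 0"""

print(slave_threads_running(sample))  # True
```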
Afterwards, you can leave the MySQL shell on server2:
quit
Now the replication from server1 to server2 is set up. Next we must configure replication from server2 to server3.