Setting Up Master-Master Replication On Four Nodes With MySQL 5 On Debian Etch

Version 1.0
Author: Falko Timme

This tutorial explains how you can set up MySQL master-master replication on four MySQL nodes (running on Debian Etch). The difference from a two-node master-master replication (which is explained here) is that with more than two nodes, the replication runs in a circle: with four nodes, the replication goes from node1 to node2, from node2 to node3, from node3 to node4, and from node4 back to node1.

Since version 5, MySQL comes with built-in support for master-master replication, solving the conflicts that earlier versions had with auto-generated keys: if node A and node B both inserted a row with an auto-incrementing key into the same table, the keys collided immediately. The advantages of master-master replication over traditional master-slave replication are that you don't have to modify your applications to direct write accesses only to the master, and that it is easier to provide high availability, because if one master fails, you still have the other masters.
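
MySQL's fix relies on two system variables, auto_increment_increment and auto_increment_offset. The concrete values below are a sketch for a four-node ring (they are my assumption, not settings taken from this tutorial): each node steps its keys by the node count and starts at its own offset, so the four key sequences can never overlap. The loop just prints the resulting sequences:

```shell
# Sketch only: each node's my.cnf would carry something like
#   auto_increment_increment = 4   # step = number of nodes in the ring
#   auto_increment_offset    = N   # 1 on server1, 2 on server2, ...
# so node N only ever generates keys N, N+4, N+8, ...
for offset in 1 2 3 4; do
  printf 'server%d keys: %s\n' "$offset" "$(seq "$offset" 4 "$((offset + 16))" | tr '\n' ' ')"
done
```

With these settings, server1 produces 1, 5, 9, ..., server2 produces 2, 6, 10, ..., and so on, which is how MySQL 5 sidesteps the duplicate-key problem described above.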

I do not issue any guarantee that this will work for you!


1 Preliminary Note

In this tutorial I will show how to replicate the database exampledb on four MySQL nodes:

  • server1 (IP address)
  • server2 (IP address)
  • server3 (IP address)
  • server4 (IP address)

Each node is a master and a slave at the same time. All four systems are running Debian Etch; however, the configuration should apply to almost all distributions with little or no modifications.

Replication will work in a circle, i.e., the replication goes from server1 to server2, from server2 to server3, from server3 to server4, and from server4 back to server1:

... --> server1 --> server2 --> server3 --> server4 --> server1 --> ... 


2 Installing MySQL 5.0

If MySQL 5.0 isn't already installed on server1 to server4, install it now:


apt-get install mysql-server-5.0 mysql-client-5.0

To make sure that the replication can work, we must make MySQL listen on all interfaces; therefore we comment out the line bind-address = 127.0.0.1 in /etc/mysql/my.cnf:


vi /etc/mysql/my.cnf
# Instead of skip-networking the default is now to listen only on
# localhost which is more compatible and is not less secure.
#bind-address           = 127.0.0.1

Restart MySQL afterwards:


/etc/init.d/mysql restart

Then check with


netstat -tap | grep mysql

that MySQL is really listening on all interfaces:

server1:~# netstat -tap | grep mysql
tcp        0      0 *:mysql                 *:*                     LISTEN     2671/mysqld

Afterwards, set a MySQL password for the user root@localhost:


mysqladmin -u root password yourrootsqlpassword

Next we create MySQL passwords for the user root on the hosts server1, server2, server3, and server4:


mysqladmin -h server1 -u root password yourrootsqlpassword


mysqladmin -h server2 -u root password yourrootsqlpassword


mysqladmin -h server3 -u root password yourrootsqlpassword


mysqladmin -h server4 -u root password yourrootsqlpassword

Now, on server1, we set up a replication user slaveuser_for_s2 that can be used by server2 to access the MySQL database on server1:


mysql -u root -p

On the MySQL shell, run the following commands:

GRANT REPLICATION SLAVE ON *.* TO 'slaveuser_for_s2'@'%' IDENTIFIED BY 'slave_user_for_server2_password';
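
If you want to confirm the grant before moving on, you can run a quick check on the same MySQL shell (this check is my addition, not a step from the tutorial; note also that the '%' host wildcard lets any host use this account, so in a locked-down setup you might restrict it to server2's address):

```sql
-- Show the privileges just granted to the replication user:
SHOW GRANTS FOR 'slaveuser_for_s2'@'%';
-- GRANT already applies the change, but reloading the grant tables
-- does no harm if you prefer to be explicit:
FLUSH PRIVILEGES;
```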

Then, on server2, we set up a replication user slaveuser_for_s3 that can be used by server3 to access the MySQL database on server2...


mysql -u root -p

GRANT REPLICATION SLAVE ON *.* TO 'slaveuser_for_s3'@'%' IDENTIFIED BY 'slave_user_for_server3_password';

... and, on server3, a replication user slaveuser_for_s4 that can be used by server4 to access the MySQL database on server3...


mysql -u root -p

GRANT REPLICATION SLAVE ON *.* TO 'slaveuser_for_s4'@'%' IDENTIFIED BY 'slave_user_for_server4_password';

... and finally, on server4, a replication user slaveuser_for_s1 that can be used by server1 to access the MySQL database on server4:


mysql -u root -p

GRANT REPLICATION SLAVE ON *.* TO 'slaveuser_for_s1'@'%' IDENTIFIED BY 'slave_user_for_server1_password';
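
To summarize the ring wiring described above, each server replicates from its predecessor using the slave user named after itself. The small loop below is only an illustration of that mapping, not a command from the tutorial:

```shell
# Print which server replicates from which, and with which user
# (server1's predecessor wraps around to server4):
for n in 1 2 3 4; do
  master=$(( n == 1 ? 4 : n - 1 ))
  echo "server$n replicates from server$master using slaveuser_for_s$n"
done
```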


3 Some Notes

In the following I will assume that the database exampledb already exists on server1, and that it contains tables with records.

Before we start setting up the replication, we create an empty database exampledb on server2, server3, and server4:


mysql -u root -p

CREATE DATABASE exampledb;


Comments


By: koby

Great feedback.

What do you suggest?

I need a mechanism to sync tables between servers. I have a set of servers, and users are moved between them, so I need to keep identical copies of as many as 10 tables (out of ~150) in all databases. What else can I use (other than reinventing the wheel)?



By: Sheeri K. Cabral

I am in full agreement with Keith that master/master replication and circular replication spell trouble.  Which is the "authoritative" copy?

 Also, if you have a ring where replication is as follows:

 srv1 -> srv2 -> srv3 -> srv4 -> srv1

(server2 is a slave of server1, etc)

Then if one link in the chain breaks, no new data goes down the line.  For example, if srv3 is down, then changes happen like this:

 srv3->srv4->srv1->srv2 (OK, because srv3 already has the data)

 srv4->srv1->srv2 (Bad, srv3 doesn't get the change from srv4)

srv1->srv2 (Bad, srv3 and srv4 don't get the change from srv1, even though replication on srv4 is working just fine.)

 srv2 (bad, no other machines get changes from srv2, even though replication is only broken on one machine).

By: Joe Dunnigan

The following article addresses the issue of the replication chain being broken if a node goes down:

To answer your first question, in this and any of the MySQL master-master tutorials on howtoforge, the first server is the authoritative server. None of the other servers will ever actually be contacted by the application, either to write or read data, unless the primary master goes down. There is little discussion so far on load balancing this configuration.

By: Keith Murphy


 I have seen your tutorials before. You always do a good job. However, I would say that setting up a ring of servers in this manner and writing to them all is a VERY bad idea. Replication simply isn't robust enough for it. Simple MM replication is bad enough, and it isn't a good idea to be writing to both servers in that setup. Adding more servers just adds more points of breakage.

 I have worked with customers who have this type of setup. Everything's great until something goes wrong, and then you don't know what was written, where it was written, or if it should have been written.

 If you want multiple servers to read off of then a simple master with multi slave setup will suffice.

 It is a VERY BAD IDEA to write to multiple servers and expect them to stay in sync!!


Keith Murphy



The way it is done here is indeed a bad idea, but I'm sure it's possible, with a simple script, for a slave server to detect a failure of its master and then start replicating from another master.

Also, I'm sure it would be possible to do something like a "RAID 5" replication circle. From what I read on Linux-HA, Heartbeat v2 supports multiple nodes, so there could be a failover node that would take over if any node went down... Basically, that failover node would sync with the others but would not become a master until a live node went down. Then it would take over the IP of the failed node...

@Keith: A simple master with slaves is not a super good idea either; the single master is also a point of failure. Two-way master-master with slaves would be a better idea.

By: robw

"I'm sure it's possible" is a world of difference from "it works solidly, lots of people do this every day".  Please listen to Sheeri and Keith, they know the truth, they bear the scars.

The situation has certainly improved in MySQL 5 with the auto_increment_* settings being a cool hack to get it working at all, but it's still too fragile unless in a tightly controlled environment, where the DBA can sit next to the application developer and keep them on the right path, help them to understand the limitations.

As a DBA, and that's who this how-to was written for, can you guarantee, for example, that the application will never try to write the same unique key to two different nodes?  How does the circle deal with a node failure?  How does the application deal with a node failure?

The answers in practice aren't as simple as one might think.  They're all quite nasty, actually.

We ultimately have to trust that the coders understand the design weaknesses and have a strict enough coding practice to code around them every time.  Much easier said than done, especially when even DBAs struggle to understand the problem.


-- Rob

By: Andi

WHY do you repeat all the steps for every server instead of giving one abstract howto that applies to every server?

It would be better if you gave an overview of the abstract concepts behind the MySQL server.

Anyway, thank you for the nice howto.

By: Jawi


When I put in these settings:

replicate-same-server-id = 0
auto-increment-increment = 4
auto-increment-offset = 1
master-host =
master-user = slaveuser_for_s1
master-password = slave_user_for_server1_password
master-connect-retry = 60
replicate-do-db = exampledb

restarting MySQL gives an error!!

By: Jawi

the error message is saying :

Job for mysql.service failed because the control process exited with error code. See "systemctl status mysql.service" and "journalctl -xe" for details.

By: Norbert63

Hi Falko,

Big thanks for the great tutorial. It would be nice if you added a chapter on how to recover after any kind of disaster that could happen: how to get this nice circle back in sync and working again if, for example, server3 needs a software update or, more seriously, a reboot.

I am wondering if this is a good solution for 3-4 servers distributed around the world: Europe, USA, Asia...