Setting Up Master-Master Replication On Four Nodes With MySQL 5 On Debian Etch - Page 3

4.2 Setting Up Replication On server3

(This chapter is for server3 only!)

On server3, we can now import the SQL dump snapshot.sql like this:

server3:

/usr/bin/mysqladmin --user=root --password=yourrootsqlpassword stop-slave
cd /tmp
mysql -u root -pyourrootsqlpassword exampledb < snapshot.sql

Afterwards, we must check the master status of server3 as well and write it down:

mysql -u root -p

USE exampledb;
FLUSH TABLES WITH READ LOCK;

SHOW MASTER STATUS;

mysql> SHOW MASTER STATUS;
+------------------+----------+--------------+------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+------------------+----------+--------------+------------------+
| mysql-bin.000010 |     1067 | exampledb    |                  |
+------------------+----------+--------------+------------------+
1 row in set (0.00 sec)

mysql>

Then unlock the tables:

UNLOCK TABLES;

and run the following command to make server3 a slave of server2. It is important that you replace the values in this command with the values you got from the SHOW MASTER STATUS; command that we ran on server2:

CHANGE MASTER TO MASTER_HOST='192.168.0.101', MASTER_USER='slaveuser_for_s3', MASTER_PASSWORD='slave_user_for_server3_password', MASTER_LOG_FILE='mysql-bin.000010', MASTER_LOG_POS=1067;
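If you prefer not to copy the file name and position by hand, they can be pulled out of the SHOW MASTER STATUS output with a small helper. This is a convenience sketch, not part of the original setup; the password placeholder is the same one used throughout this guide:

```shell
# parse_master_status: reads "SHOW MASTER STATUS \G" output on stdin
# and prints the binlog file name and position on one line.
parse_master_status() {
    awk '/File:/     { file = $2 }
         /Position:/ { pos  = $2 }
         END         { print file, pos }'
}

# On server2, for example (run there, then paste the two values into
# the CHANGE MASTER TO command on server3):
#   mysql -u root -pyourrootsqlpassword -e 'SHOW MASTER STATUS \G' | parse_master_status
```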

Finally start the slave:

START SLAVE;

Then check the slave status:

SHOW SLAVE STATUS \G

It is important that both Slave_IO_Running and Slave_SQL_Running have the value Yes in the output. Otherwise something went wrong, and you should check your setup again and take a look at /var/log/syslog for errors:

mysql> SHOW SLAVE STATUS \G
*************************** 1. row ***************************
             Slave_IO_State: Waiting for master to send event
                Master_Host: 192.168.0.101
                Master_User: slaveuser_for_s3
                Master_Port: 3306
              Connect_Retry: 60
            Master_Log_File: mysql-bin.000010
        Read_Master_Log_Pos: 1067
             Relay_Log_File: slave-relay.000002
              Relay_Log_Pos: 235
      Relay_Master_Log_File: mysql-bin.000010
           Slave_IO_Running: Yes
          Slave_SQL_Running: Yes
            Replicate_Do_DB: exampledb
        Replicate_Ignore_DB:
         Replicate_Do_Table:
     Replicate_Ignore_Table:
    Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
                 Last_Errno: 0
                 Last_Error:
               Skip_Counter: 0
        Exec_Master_Log_Pos: 1067
            Relay_Log_Space: 235
            Until_Condition: None
             Until_Log_File:
              Until_Log_Pos: 0
         Master_SSL_Allowed: No
         Master_SSL_CA_File:
         Master_SSL_CA_Path:
            Master_SSL_Cert:
          Master_SSL_Cipher:
             Master_SSL_Key:
      Seconds_Behind_Master: 0
1 row in set (0.00 sec)

mysql>
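If you want to monitor these two flags from a script instead of reading the \G output by eye, a small shell helper can do the check. This is a hypothetical helper, not part of the original setup; the password placeholder matches the rest of this guide:

```shell
# check_slave_status: reads "SHOW SLAVE STATUS \G" output on stdin and
# exits 0 only if both replication threads report Yes.
check_slave_status() {
    awk '/Slave_IO_Running:/  { io  = $2 }
         /Slave_SQL_Running:/ { sql = $2 }
         END { exit !(io == "Yes" && sql == "Yes") }'
}

# Typical use on any of the slaves:
#   mysql -u root -pyourrootsqlpassword -e 'SHOW SLAVE STATUS \G' \
#     | check_slave_status && echo "replication OK" || echo "replication BROKEN"
```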

Afterwards, you can leave the MySQL shell on server3:

quit

Now the replication from server2 to server3 is set up. Next we must configure replication from server3 to server4.

 

4.3 Setting Up Replication On server4

(This chapter is for server4 only!)

On server4, we can now import the SQL dump snapshot.sql like this:

server4:

/usr/bin/mysqladmin --user=root --password=yourrootsqlpassword stop-slave
cd /tmp
mysql -u root -pyourrootsqlpassword exampledb < snapshot.sql

Afterwards, we must check the master status of server4 as well and write it down:

mysql -u root -p

USE exampledb;
FLUSH TABLES WITH READ LOCK;

SHOW MASTER STATUS;

mysql> SHOW MASTER STATUS;
+------------------+----------+--------------+------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+------------------+----------+--------------+------------------+
| mysql-bin.000010 |     1067 | exampledb    |                  |
+------------------+----------+--------------+------------------+
1 row in set (0.00 sec)

mysql>

Then unlock the tables:

UNLOCK TABLES;

and run the following command to make server4 a slave of server3. It is important that you replace the values in this command with the values you got from the SHOW MASTER STATUS; command that we ran on server3:

CHANGE MASTER TO MASTER_HOST='192.168.0.102', MASTER_USER='slaveuser_for_s4', MASTER_PASSWORD='slave_user_for_server4_password', MASTER_LOG_FILE='mysql-bin.000010', MASTER_LOG_POS=1067;

Finally start the slave:

START SLAVE;

Then check the slave status:

SHOW SLAVE STATUS \G

It is important that both Slave_IO_Running and Slave_SQL_Running have the value Yes in the output. Otherwise something went wrong, and you should check your setup again and take a look at /var/log/syslog for errors:

mysql> SHOW SLAVE STATUS \G
*************************** 1. row ***************************
             Slave_IO_State: Waiting for master to send event
                Master_Host: 192.168.0.102
                Master_User: slaveuser_for_s4
                Master_Port: 3306
              Connect_Retry: 60
            Master_Log_File: mysql-bin.000010
        Read_Master_Log_Pos: 1067
             Relay_Log_File: slave-relay.000002
              Relay_Log_Pos: 235
      Relay_Master_Log_File: mysql-bin.000010
           Slave_IO_Running: Yes
          Slave_SQL_Running: Yes
            Replicate_Do_DB: exampledb
        Replicate_Ignore_DB:
         Replicate_Do_Table:
     Replicate_Ignore_Table:
    Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
                 Last_Errno: 0
                 Last_Error:
               Skip_Counter: 0
        Exec_Master_Log_Pos: 1067
            Relay_Log_Space: 235
            Until_Condition: None
             Until_Log_File:
              Until_Log_Pos: 0
         Master_SSL_Allowed: No
         Master_SSL_CA_File:
         Master_SSL_CA_Path:
            Master_SSL_Cert:
          Master_SSL_Cipher:
             Master_SSL_Key:
      Seconds_Behind_Master: 0
1 row in set (0.00 sec)

mysql>

Afterwards, you can leave the MySQL shell on server4:

quit

Now the replication from server3 to server4 is set up. Finally we must configure replication from server4 to server1 to close the replication circle.

 

4.4 Setting Up Replication On server1

(This chapter is for server1 only!)

To close the replication circle, we stop the slave on server1 and make it a slave of server4:

server1:

mysql -u root -p
STOP SLAVE;

Make sure that you use the values of the SHOW MASTER STATUS; command that you ran on server4 in the following command:

CHANGE MASTER TO MASTER_HOST='192.168.0.103', MASTER_USER='slaveuser_for_s1', MASTER_PASSWORD='slave_user_for_server1_password', MASTER_LOG_FILE='mysql-bin.000010', MASTER_LOG_POS=1067;

Then start the slave on server1:

START SLAVE;

Then check the slave status:

SHOW SLAVE STATUS \G

It is important that both Slave_IO_Running and Slave_SQL_Running have the value Yes in the output. Otherwise something went wrong, and you should check your setup again and take a look at /var/log/syslog for errors:

mysql> SHOW SLAVE STATUS \G
*************************** 1. row ***************************
             Slave_IO_State: Waiting for master to send event
                Master_Host: 192.168.0.103
                Master_User: slaveuser_for_s1
                Master_Port: 3306
              Connect_Retry: 60
            Master_Log_File: mysql-bin.000010
        Read_Master_Log_Pos: 1067
             Relay_Log_File: slave-relay.000002
              Relay_Log_Pos: 235
      Relay_Master_Log_File: mysql-bin.000010
           Slave_IO_Running: Yes
          Slave_SQL_Running: Yes
            Replicate_Do_DB: exampledb
        Replicate_Ignore_DB:
         Replicate_Do_Table:
     Replicate_Ignore_Table:
    Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
                 Last_Errno: 0
                 Last_Error:
               Skip_Counter: 0
        Exec_Master_Log_Pos: 1067
            Relay_Log_Space: 235
            Until_Condition: None
             Until_Log_File:
              Until_Log_Pos: 0
         Master_SSL_Allowed: No
         Master_SSL_CA_File:
         Master_SSL_CA_Path:
            Master_SSL_Cert:
          Master_SSL_Cipher:
             Master_SSL_Key:
      Seconds_Behind_Master: 0
1 row in set (0.00 sec)

mysql>

Afterwards you can leave the MySQL shell:

quit

If nothing went wrong, MySQL master-master replication should now be working. If it isn't, please check /var/log/syslog for MySQL errors.
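To convince yourself that the full circle works end to end, you can write a marker row on one node and watch it appear on the other three. The repl_check table and marker column below are made up for this check and are not part of the guide; the password placeholder is the one used throughout:

```shell
# End-to-end replication check (illustrative; repl_check is a made-up table).
PASS=yourrootsqlpassword
MARKER="repltest_$$"

# build_check_sql: returns the SQL that creates the marker table
# (idempotent, thanks to IF NOT EXISTS) and inserts one marker row.
build_check_sql() {
    printf 'CREATE TABLE IF NOT EXISTS repl_check (marker VARCHAR(64)); '
    printf 'INSERT INTO repl_check VALUES ("%s");' "$1"
}

# Run the insert on server1:
#   mysql -u root -p"$PASS" exampledb -e "$(build_check_sql "$MARKER")"
# Then, after a few seconds, the same row should be visible on
# server2, server3 and server4:
#   mysql -u root -p"$PASS" exampledb -e \
#     "SELECT marker FROM repl_check WHERE marker='$MARKER';"
```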

 

Comments

By:

"...there is absolutely nothing to gain from writing to more than one of the servers at a time." Actually I think there is (and I have been using it for more than a year now). If your application is designed for HA, then you probably use some kind of MySQL proxy system or other app fail-over mechanism. Then you will be able to write to any cluster node. For example: if a node of the circular multi-master cluster fails, your app will continue to run, and write data! Human intervention (best, in my opinion) or very complex automated stored-procedure methods exist to recover, or at least reconnect a smaller circle of the full cluster. Meanwhile your app is still running on a subset of the cluster :).

Caveat: you probably should use other, multiple backup systems for your MySQL data. And please, your apps should be able to handle failure correctly. This also means that your app should not rely on the MySQL cluster for mission-critical roles, or (shudder) life- or property-critical roles!

By: Anonymous

It is best not to tell people how to do this or give them ideas about it.  The auto-increment settings solve nothing.  They just make it possible for you to fail to notice how badly messed up your data is getting.  In addition, there is absolutely nothing to gain from writing to more than one of the servers at a time.  And there are many edge cases when things can go horribly wrong.

I advise readers to get a copy of High Performance MySQL, Second Edition and read the replication chapter twice.

By: Anonymous

I'm sorry, but MySQL replication is quite good these days.  I have been using master <-> master replication for quite some time without issue.

By: Andy Haveland-Robinson

I've been using this kind of replication for a few years, experienced and recovered from a lot of these disasters. Here is some advice based on my experience:

  1. Never ignore replication errors, unless you are really sure. They can be skipped on the slave with this command:
    mysql -u root -ppassword -BEse 'set global sql_slave_skip_counter=1;start slave sql_thread'

  2. Another useful command:
    watch -n10 'mysql -u root -ppassword -BEse "show slave status"'

  3. I tried mysql-proxy, but if there are a lot of users writing data, there are risks that it selects a server that hasn't yet reached a required update, and replication could fail with foreign key errors, Not Good! I never could get mysql-proxy's rw-splitting script to work properly using utf8, and it didn't keep track of current databases properly. After years of being in development I still found it unusable for even amateur use.
    It seems to be a holy grail, to be able to abstract the database layer from the application, but I think it is just a dream. The proxy has to know more than just about connections, and should know more than just about read or write.
    It should make decisions based on sessions and users and assign a server to a user, and stick to it for the session, or until a failover, in which case it should be reassigned to another server after a reasonable delay to allow propagation.
    Mysql-proxy can't tie a user to a server because it doesn't know about users, only connections and some sql parsing, so I found it better to move this decision making to the web application layer, and write my own affinity/failover load balancer to select a free server from a rack of servers over ssh tunnels. In this way, users don't randomly get logged out or see inconsistent pages and think that they have to update again what they thought they just updated!
    If affinity is user-orientated, then on a social network site for example, other users' activities become invisible, but they can each continue to use the server they have been assigned without any problems. When replication resumes hopefully within a few seconds, everyone will receive the pending updates.

  4. Triggers can be difficult, so thoroughly debug on a test platform before going live and hit it hard with something like siege.

  5. If you use a cluster, always add a spare slave to take point-in-time backups. If any of the servers' databases acquires an inconsistency, this slave is likely to be the most complete.

  6. Restoring a backup and resynchronising can be inconvenient, especially if there are many gigabytes of data, and the simplest approach is to go offline for maintenance, and reload from latest backup. Allowing reloading to replicate is the simplest way, but wasteful with a big database.
    In this case I would recommend failing over to a good master or masters (after reconfiguring the ring), stopping the slaves, backing up a good slave, gzip, send to each server, ungzip and use SET SQL_LOG_BIN=0; before reloading. Once loaded, all the slaves will be at the same state as the nearest slave to the working master(s). This can also be scripted. They can be reset to reclaim log space and re-added into the cluster. Paying attention to the state of SQL_LOG_BIN is paramount when doing this!!!

  7. It is possible to use circular replication successfully. Facebook proves this with thousands of MySQL servers.
    I run a webserver using circular replication - two of the db servers are at home connected over permanent SSH tunnels to save on colocation costs as the bandwidth requirements are minimal. I wrote my own tools to reshape a cluster on-the-fly, and can add and remove servers within seconds. Theoretically, the php application could reshape the cluster on demand, but at present it just selects another working database and updates a file of port addresses in /dev/shm to notify other php sessions that a server became offline.
    Next enhancement would be to use memcached instead, but I'm still working on proof of concept.

Managing a cluster is a huge responsibility; one slip and it could be Game Over!

Andy.

By:

Should a server go down, how does the replication continue? As this is going round in a circle, each server getting updates from the previous, there will be a break in the sequence, replication will not continue. How do you overcome this?