The Perfect Load-Balanced & High-Availability Web Cluster With 2 Servers Running Xen On Ubuntu 8.04 Hardy Heron
10. MySQL replication (web1, web2)
10.1 Installing MySQL 5.0
apt-get install mysql-server-5.0 mysql-client-5.0
To make sure that replication can work, MySQL must listen on all interfaces, so we comment out the line bind-address = 127.0.0.1 in /etc/mysql/my.cnf:
vi /etc/mysql/my.cnf
[...]
# Instead of skip-networking the default is now to listen only on
# localhost which is more compatible and is not less secure.
#bind-address = 127.0.0.1
[...]
Restart MySQL afterwards:
/etc/init.d/mysql restart
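To double-check that MySQL is now listening on all interfaces (an optional sanity check; it assumes the netstat tool from net-tools is available, which it is on a default Ubuntu 8.04 install), run:
netstat -tap | grep mysql
mysqld should now show up bound to *:mysql instead of localhost:mysql.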
On web1
Now we set up a replication user slave2_user that web2.example.com can use to access the MySQL database on web1:
mysql -u root -p
On the MySQL shell, run the following commands :
GRANT REPLICATION SLAVE ON *.* TO 'slave2_user'@'%' IDENTIFIED BY 'slave2_password';
FLUSH PRIVILEGES;
quit;
On web2
Now we repeat these steps on web2.example.com, this time creating the user slave1_user that web1.example.com will use to connect:
mysql -u root -p
GRANT REPLICATION SLAVE ON *.* TO 'slave1_user'@'%' IDENTIFIED BY 'slave1_password';
FLUSH PRIVILEGES;
quit;
On web1 AND web2
We will now create a database that will be used later by the mail server:
mysqladmin -u root -p create mail
Next, we go to the MySQL shell:
mysql -u root -p
On the MySQL shell, we create the user mail_admin with the password mail_admin_password (replace it with your own password) and give it SELECT, INSERT, UPDATE, and DELETE privileges on the mail database. This user will be used by Postfix and Courier to connect to the mail database:
GRANT SELECT, INSERT, UPDATE, DELETE ON mail.* TO 'mail_admin'@'localhost' IDENTIFIED BY 'mail_admin_password';
GRANT SELECT, INSERT, UPDATE, DELETE ON mail.* TO 'mail_admin'@'localhost.localdomain' IDENTIFIED BY 'mail_admin_password';
FLUSH PRIVILEGES;
quit;
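If you want to make sure the grants were stored as intended, you can log back into the MySQL shell later and list them:
SHOW GRANTS FOR 'mail_admin'@'localhost';
The output should include the SELECT, INSERT, UPDATE, DELETE privileges on mail.* that we just granted.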
10.2 Setting Up Replication
Now we set up master-master replication in /etc/mysql/my.cnf. The crucial configuration options for master-master replication are auto_increment_increment and auto_increment_offset:
- auto_increment_increment controls the increment between successive AUTO_INCREMENT values.
- auto_increment_offset determines the starting point for AUTO_INCREMENT column values.
Let's assume we have N MySQL nodes (N=2 in this example), then auto_increment_increment has the value N on all nodes, and each node must have a different value for auto_increment_offset (1, 2, ..., N).
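For example, with the values used below (auto_increment_increment = 2 on both nodes, auto_increment_offset = 1 on web1 and 2 on web2), an AUTO_INCREMENT column on web1 generates the IDs 1, 3, 5, ..., while the same column on web2 generates 2, 4, 6, ..., so both masters can accept writes at the same time without ever producing the same key. If you want to check the effective values on a node after the configuration below has been applied, you can run this on the MySQL shell:
SHOW VARIABLES LIKE 'auto_increment%';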
Now let's configure our two MySQL nodes:
On web1
vi /etc/mysql/my.cnf
Add the following lines right below the "[mysqld]" line:
server-id = 1
replicate-same-server-id = 0
auto-increment-increment = 2
auto-increment-offset = 1
# use 192.168.1.105 if you didn't install a crossover cable on eth1
master-host = 192.168.0.105
master-user = slave1_user
master-password = slave1_password
master-connect-retry = 60
replicate-do-db = mail
log-bin = /var/log/mysql/mysql-bin.log
binlog-do-db = mail
relay-log = /var/lib/mysql/slave-relay.log
relay-log-index = /var/lib/mysql/slave-relay-log.index
and modify the line "max_binlog_size":
[...]
max_binlog_size = 500M
[...]
/etc/init.d/mysql restart
On web2
vi /etc/mysql/my.cnf
Add the following lines right below the "[mysqld]" line:
server-id = 2
replicate-same-server-id = 0
auto-increment-increment = 2
auto-increment-offset = 2
# use 192.168.1.104 if you didn't install a crossover cable on eth1
master-host = 192.168.0.104
master-user = slave2_user
master-password = slave2_password
master-connect-retry = 60
replicate-do-db = mail
log-bin = /var/log/mysql/mysql-bin.log
binlog-do-db = mail
relay-log = /var/lib/mysql/slave-relay.log
relay-log-index = /var/lib/mysql/slave-relay-log.index
and modify the line "max_binlog_size":
[...]
max_binlog_size = 500M
[...]
/etc/init.d/mysql restart
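After restarting MySQL on both nodes, you can quickly verify that each node picked up its own settings; the values must differ between web1 and web2:
mysql -u root -p -e "SHOW VARIABLES LIKE 'server_id';"
web1 should report a server_id of 1 and web2 a server_id of 2.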
On web1 AND web2
Now we will start replication:
mysql -u root -p
On the MySQL shell, run the following commands:
reset master;
stop slave;
reset slave;
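The reset master above rewinds the binary log, so on MySQL 5.0 the current log file is mysql-bin.000001 and the first usable position is 98; these are exactly the coordinates used in the CHANGE MASTER TO statements below. If you prefer to read the real values instead of assuming them, run the following on the other node and use the File and Position it reports:
SHOW MASTER STATUS;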
Now run the following:
On web1
CHANGE MASTER TO MASTER_HOST='192.168.0.105', MASTER_USER='slave1_user', MASTER_PASSWORD='slave1_password', MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=98;
start slave;
quit;
On web2
CHANGE MASTER TO MASTER_HOST='192.168.0.104', MASTER_USER='slave2_user', MASTER_PASSWORD='slave2_password', MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=98;
start slave;
quit;
Now replication should work.
10.3 Testing replication
On web1 AND web2
mysql -u root -p
On the MySQL shell, run the following commands:
show slave status \G
The important lines in the output should look like this:
[...]
Slave_IO_State: Waiting for master to send event
[...]
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Replicate_Do_DB: mail
[...]
Now you can quit the MySQL shell:
quit;
On web1
We will now create some tables on web1.example.com, both to test replication and to be used by the mail server setup in the next chapter:
mysql -u root -p
On the MySQL shell, run the following commands:
use mail;
We will create the following tables:
CREATE TABLE domains (
domain varchar(50) NOT NULL,
transport varchar(128) NOT NULL default 'smtp:[192.168.1.104]',
PRIMARY KEY (domain) )
TYPE=MyISAM;
CREATE TABLE forwardings (
source varchar(80) NOT NULL,
destination TEXT NOT NULL,
PRIMARY KEY (source) )
TYPE=MyISAM;
CREATE TABLE users (
email varchar(80) NOT NULL,
password varchar(20) NOT NULL,
quota INT(10) DEFAULT '10485760',
PRIMARY KEY (email)
) TYPE=MyISAM;
CREATE TABLE transport (
domain varchar(128) NOT NULL default '',
transport varchar(128) NOT NULL default '',
UNIQUE KEY domain (domain)
) TYPE=MyISAM;
quit;
These freshly created tables should appear in the mail database on web2 as well, thanks to replication.
On web2
Now we will verify this:
mysql -u root -p
On the MySQL shell, run the following commands:
use mail;
show tables;
The output should be:
+----------------+
| Tables_in_mail |
+----------------+
| domains        |
| forwardings    |
| transport      |
| users          |
+----------------+
4 rows in set (0.00 sec)
quit;
If you see this output, replication is working.
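If you also want to confirm that row data (and not only table definitions) is replicated, you can insert a throw-away test row on web1 and look for it on web2. The domain name used here is only a placeholder; note the use mail; statements, because with statement-based replication both binlog-do-db and replicate-do-db filter on the current default database.
On web1, on the MySQL shell:
use mail;
INSERT INTO domains (domain) VALUES ('replicationtest.example.com');
On web2, on the MySQL shell:
use mail;
SELECT * FROM domains;
The test row should appear in the output. Afterwards remove it again (on either node; the delete is replicated as well):
use mail;
DELETE FROM domains WHERE domain = 'replicationtest.example.com';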
10.4 Creating the user for ldirectord
We will now create the MySQL user that ldirectord.php uses to connect to the database.
On web1 AND web2
mysql -u root -p
On the MySQL shell, run the following commands:
GRANT USAGE ON *.* TO 'ldirectord'@'localhost' IDENTIFIED BY 'LDIRECTORD_PASSWORD';
quit;
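Before testing through the browser, you can check from the command line that the new account is able to connect at all (replace LDIRECTORD_PASSWORD with the password you actually used in the GRANT statement):
mysql -u ldirectord -pLDIRECTORD_PASSWORD -e "SELECT 1;"
If this prints a small result table instead of an access-denied error, the account works.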
Now, when you point your browser at the following addresses:
http://192.168.1.104/ldirectord.php
and
http://192.168.1.105/ldirectord.php
You should see:
Connected to MySQL
displayed on the screen.