Installing The Galera-Iworx Cluster
This guide was written by Jos Jans (https://nl.linkedin.com/in/jgajans), who works at UP learning (http://uplearning.nl), a Dutch e-learning specialist. My thanks go to him for his time and dedication to this project.
This document describes the installation of the Galera/Interworx cluster. This cluster provides the default load balancing that is available within Interworx, and adds MySQL load balancing through MySQL Galera clustering.
Pre-requisites
This guide assumes that you have installed your servers with CentOS 6.4 (or later), and that each server has two NICs: one with an external connection (192.168.120.x) and the other in a private VLAN (172.20.0.x).
Install the Atomic & EPEL repositories
Since we need some additional packages, we have to add some repositories to the server installation.
yum -y install wget
wget http://ftp.nluug.nl/pub/os/Linux/distr/fedora-epel/6/i386/epel-release-6-8.noarch.rpm
rpm -Uvh epel-release-6-8.noarch.rpm
wget -q -O - http://www.atomicorp.com/installers/atomic | sh
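If you want to verify that both repositories are now active, you can run:
yum repolist enabled | grep -Ei 'epel|atomic'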
Add a line to the atomic repo (since we don't want to use the atomic repo for MySQL):
nano /etc/yum.repos.d/atomic.repo
[atomic]
exclude=mysql*
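You can check that the exclusion works by listing the package; the repository column should show the CentOS base repo rather than atomic:
yum list mysql-server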
Monitoring tools
In this part we will set up the local monitoring tools:
yum -y install nano git iftop ntop htop mytop lynx screen gcc mutt innotop iotop mtr man perl-DBD-MySQL
Other packages
Ok, now that we are able to monitor the server, we can install the real software:
Webserver programs
yum -y install httpd clamav mysql mysql-server mysql-devel php-common php-dom php-pear php-soap php-pdo php-mysql php-devel php-gd php-ldap php-mbstring php-intl php-mcrypt phpmyadmin php-xmlrpc php-cli php-iconv php-ctype php-tokenizer aspell php-xcache xcache-admin
Common programs (especially needed if you use iSCSI for Interworx later on)
yum -y install iscsi-initiator-utils lsscsi device-mapper-multipath dstat nfs-utils nfs-utils-lib
Set hostname
On all of the servers we will add the hostnames to the /etc/hosts file
echo 192.168.120.1 master.hosting.local master >> /etc/hosts
echo 192.168.120.2 slave1.hosting.local slave1 >> /etc/hosts
echo 192.168.120.3 slave2.hosting.local slave2 >> /etc/hosts
echo 192.168.120.4 slave3.hosting.local slave3 >> /etc/hosts
echo 172.20.0.1 master >> /etc/hosts
echo 172.20.0.2 slave1 >> /etc/hosts
echo 172.20.0.3 slave2 >> /etc/hosts
echo 172.20.0.4 slave3 >> /etc/hosts
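You can verify that every name resolves to the expected address with:
getent hosts master slave1 slave2 slave3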
Once this is done, make sure that hostname and hostname -f return the same value, and that hostname -i doesn't give you 127.0.0.1 or 127.0.1.1.
hostname && hostname -f
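And for the third check mentioned above:
hostname -i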
SElinux & IPTables
Disable these services by running:
service iptables stop
setenforce 0
Edit the file /etc/sysconfig/selinux so it reads:
SELINUX=disabled
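Note that service iptables stop only disables the firewall for the running session; to keep it disabled after the reboot at the end of this section, also run:
chkconfig iptables off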
Configure ntpd
Since cluster services need the correct time, we have to install ntpd (the time server daemon).
yum -y install ntp && chkconfig ntpd on
Let's add a time server as well:
nano /etc/ntp.conf
server pool.ntp.org
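Optionally, start the daemon right away and check that it can reach its time servers:
service ntpd start
ntpq -p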
Create ssh-keys (on each server)
Because all the servers should be able to communicate with each other, we have to create an SSH key on each server.
ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Now add the generated SSH public key (/root/.ssh/id_rsa.pub) of every server to the following file on all the servers:
nano ~/.ssh/authorized_keys
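As an alternative to pasting the keys by hand, you can use ssh-copy-id (from the openssh-clients package) on each server, for example:
ssh-copy-id root@master
ssh-copy-id root@slave1
ssh-copy-id root@slave2
ssh-copy-id root@slave3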
If this step is completed, try to log on to each server and accept the host key by answering "yes" at the prompt:
ssh master
exit
ssh slave1
exit
ssh slave2
exit
ssh slave3
exit
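You can confirm that passwordless logins now work from this server by running:
for h in master slave1 slave2 slave3; do ssh $h hostname; done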
Now restart the server:
shutdown -r now