How To Set Up An Active/Passive PostgreSQL Cluster With Pacemaker, Corosync, And DRBD (CentOS 5.5) - Page 3
This article explains how to set up (and monitor) an Active/Passive PostgreSQL Cluster, using Pacemaker with Corosync and DRBD. Prepared by Rafael Marangoni, from BRLink Servidor Linux Team.
6. Configuring Corosync (openAIS)
The Corosync project was forked from the OpenAIS project, and since Pacemaker works very well with Corosync, that's what we'll use here.
node1:
To configure Corosync, we first gather the network settings we'll need into environment variables:
Only on node1, do:
export ais_port=4000
export ais_mcast=226.94.1.1
export ais_addr=`ip address show eth0 | grep "inet " | tail -n 1 | awk '{print $4}' | sed s/255/0/`
Then, we check the data:
env | grep ais_
Important: The variable ais_addr must contain the network address that the cluster will listen on. In this article, this address is 10.0.0.0.
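If the automatic detection above does not return the right value (for example, because your cluster interface is not eth0), you can simply set the variable by hand; a minimal sketch, assuming the 10.0.0.0 network used in this article:

export ais_addr=10.0.0.0
env | grep ais_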
Now we create the corosync config file:
cp /etc/corosync/corosync.conf.example /etc/corosync/corosync.conf
sed -i.gres "s/.*mcastaddr:.*/mcastaddr:\ $ais_mcast/g" /etc/corosync/corosync.conf
sed -i.gres "s/.*mcastport:.*/mcastport:\ $ais_port/g" /etc/corosync/corosync.conf
sed -i.gres "s/.*bindnetaddr:.*/bindnetaddr:\ $ais_addr/g" /etc/corosync/corosync.conf
Let's add some information to the file:
cat <<-END >>/etc/corosync/corosync.conf
aisexec {
user: root
group: root
}
END
cat <<-END >>/etc/corosync/corosync.conf
service {
# Load the Pacemaker Cluster Resource Manager
name: pacemaker
ver: 0
}
END
The /etc/corosync/corosync.conf file should now look like this:
compatibility: whitetank

totem {
        version: 2
        secauth: off
        threads: 0
        interface {
                ringnumber: 0
                bindnetaddr: 10.0.0.0
                mcastaddr: 226.94.1.1
                mcastport: 4000
        }
}

logging {
        fileline: off
        to_stderr: yes
        to_logfile: yes
        to_syslog: yes
        logfile: /tmp/corosync.log
        debug: off
        timestamp: on
        logger_subsys {
                subsys: AMF
                debug: off
        }
}

amf {
        mode: disabled
}

aisexec {
        user: root
        group: root
}

service {
        # Load the Pacemaker Cluster Resource Manager
        name: pacemaker
        ver: 0
}
From node1, we'll transfer the corosync config files to node2:
scp /etc/corosync/* node2:/etc/corosync/
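If you want to be sure the files arrived intact, you can compare checksums on both nodes (an optional sanity check):

md5sum /etc/corosync/corosync.conf
ssh node2 md5sum /etc/corosync/corosync.conf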
both nodes:
On both nodes, we need to create the logs directory:
mkdir /var/log/cluster/
node1:
Afterwards, only on node1, start the corosync service:
/etc/init.d/corosync start
Let's check if the service is ok:
grep -e "Corosync Cluster Engine" -e "configuration file" /var/log/messages
[root@node1 bin]# grep -e "Corosync Cluster Engine" -e "configuration file" /var/log/messages
Apr 7 12:37:21 node1 corosync[23533]: [MAIN ] Corosync Cluster Engine ('1.2.0'): started and ready to provide service.
Apr 7 12:37:21 node1 corosync[23533]: [MAIN ] Successfully read main configuration file '/etc/corosync/corosync.conf'.
Let's check if corosync started on the right interface:
grep TOTEM /var/log/messages
[root@node1 bin]# grep TOTEM /var/log/messages
Apr 7 12:37:21 node1 corosync[23533]: [TOTEM ] Initializing transport (UDP/IP).
Apr 7 12:37:21 node1 corosync[23533]: [TOTEM ] Initializing transmit/receive security: libtomcrypt SOBER128/SHA1HMAC (mode 0).
Apr 7 12:37:21 node1 corosync[23533]: [TOTEM ] The network interface [10.0.0.191] is now up.
Apr 7 12:37:21 node1 corosync[23533]: [TOTEM ] A processor joined or left the membership and a new membership was formed.
Let's check if pacemaker is up:
grep pcmk_startup /var/log/messages
[root@node1 bin]# grep pcmk_startup /var/log/messages
Apr 7 12:37:21 node1 corosync[23533]: [pcmk ] info: pcmk_startup: CRM: Initialized
Apr 7 12:37:21 node1 corosync[23533]: [pcmk ] Logging: Initialized pcmk_startup
Apr 7 12:37:21 node1 corosync[23533]: [pcmk ] info: pcmk_startup: Maximum core file size is: 4294967295
Apr 7 12:37:21 node1 corosync[23533]: [pcmk ] info: pcmk_startup: Service: 9
Apr 7 12:37:21 node1 corosync[23533]: [pcmk ] info: pcmk_startup: Local hostname: node1
Let's check if the corosync process is up:
ps axf
[root@node1 bin]# ps axf
(should contain something like this)
23533 ? Ssl 0:00 corosync
23539 ? SLs 0:00 \_ /usr/lib/heartbeat/stonithd
23540 ? S 0:00 \_ /usr/lib/heartbeat/cib
23541 ? S 0:00 \_ /usr/lib/heartbeat/lrmd
23542 ? S 0:00 \_ /usr/lib/heartbeat/attrd
23543 ? S 0:00 \_ /usr/lib/heartbeat/pengine
23544 ? S 0:00 \_ /usr/lib/heartbeat/crmd
node2:
Afterwards, if everything is ok on node1, then we can bring corosync up on node2:
/etc/init.d/corosync start
both nodes:
Now we can check the status of the cluster by running the following command on either node:
crm_mon -1
[root@node1 ~]# crm_mon -1
============
Last updated: Fri Oct 29 17:44:36 2010
Stack: openais
Current DC: node1.clusterbr.int - partition with quorum
Version: 1.0.9-89bd754939df5150de7cd76835f98fe90851b677
2 Nodes configured, 2 expected votes
0 Resources configured.
============
Online: [ node1.clusterbr.int node2.clusterbr.int ]
Make sure that both nodes are up and shown as Online.
Set Corosync to start automatically at boot (on both nodes):
chkconfig --level 35 corosync on
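To double-check that the service is registered for the right runlevels, you can list it (optional):

chkconfig --list corosync

Runlevels 3 and 5 should show as "on".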
7. Configuring Pacemaker
One of Pacemaker's many useful features is that it automatically replicates the cluster configuration between the nodes. Administrative tasks (such as configuration changes) performed on any node are applied to the entire cluster. Therefore, every crm command used in this article can be run on any node, but only one time (do not repeat the command on more than one node).
Important commands for cluster management
To check cluster configuration:
crm_verify -L
To list cluster status and return to command prompt:
crm_mon -1
To list cluster status and stay on the status screen (it refreshes automatically; press Ctrl+C to exit):
crm_mon
To list cluster configuration:
crm configure show
To open the crm console (type quit to exit):
crm
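For example, a short interactive session might look like this (the prompt shown is illustrative and may differ slightly between crm shell versions):

crm
crm(live)# status
crm(live)# configure show
crm(live)# quit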
Configuring Stonith
Stonith ("Shoot The Other Node In The Head") is the cluster's fencing mechanism; among other things, it forcibly shuts down a cluster node that is having problems. To do this reliably, it relies on dedicated hardware. The first thing we need to do in the cluster configuration is to configure or disable Stonith. In this article we'll disable Stonith, but you can enable it if you have suitable fencing hardware. If you want to know how to use it, take a look at: http://www.clusterlabs.org/doc/crm_fencing.html
First, when checking the cluster configuration, we should see some errors related to Stonith:
crm_verify -L
So, to disable Stonith, we use the following command (on one of the nodes):
crm configure property stonith-enabled=false
Now, checking the cluster configuration, we should get no errors:
crm_verify -L
Cluster General Configuration
Run the commands once, on any node.
Because this is a two-node cluster, we tell Pacemaker to ignore the loss of quorum (otherwise the surviving node would stop all resources when its peer goes down). For more information, see the Pacemaker documentation.
crm configure property no-quorum-policy=ignore
Next we configure resource stickiness, which defines how strongly resources prefer to stay where they are instead of moving to another node.
When a node goes down and later comes back up, this setting keeps the resources on the server that stayed up the whole time.
This is very good for preventing sync problems on the node that was down, and it keeps a flapping node from bouncing the cluster services back and forth.
crm configure rsc_defaults resource-stickiness=100
Showing configuration:
crm configure show
[root@node1 ~]# crm configure show
node node1.clusterbr.int
node node2.clusterbr.int
property $id="cib-bootstrap-options" \
dc-version="1.0.9-89bd754939df5150de7cd76835f98fe90851b677" \
cluster-infrastructure="openais" \
expected-quorum-votes="2" \
stonith-enabled="false" \
no-quorum-policy="ignore"
rsc_defaults $id="rsc-options" \
resource-stickiness="100"
Configuring DBIP
We need a cluster IP address (the DBIP). To add and configure it, execute:
crm configure primitive DBIP ocf:heartbeat:IPaddr2 \
params ip=10.0.0.190 cidr_netmask=24 \
op monitor interval=30s
Showing status:
[root@node1 ~]# crm_mon -1
============
Last updated: Fri Oct 29 17:47:53 2010
Stack: openais
Current DC: node1.clusterbr.int - partition with quorum
Version: 1.0.9-89bd754939df5150de7cd76835f98fe90851b677
2 Nodes configured, 2 expected votes
1 Resources configured.
============
Online: [ node2.clusterbr.int node1.clusterbr.int ]
DBIP (ocf::heartbeat:IPaddr2): Started node2.clusterbr.int
Note that the cluster status shows where the resource is running. Here it is running on node2, but it could just as well be on node1.
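If you want to verify the address at interface level, you can check it directly on the node that is currently running the DBIP (node2 in the output above); an optional check:

ip addr show eth0 | grep 10.0.0.190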
Configuring DRBD on Cluster
Adding the DRBD resource to the cluster (the drbd_resource parameter must match the DRBD resource name defined earlier in /etc/drbd.conf):
crm configure primitive drbd_postgres ocf:linbit:drbd \
params drbd_resource="postgres" \
op monitor interval="15s"
Create the master/slave resource that controls which node is the DRBD Primary and which is the Secondary:
crm configure ms ms_drbd_postgres drbd_postgres \
meta master-max="1" master-node-max="1" \
clone-max="2" clone-node-max="1" \
notify="true"
Configure the filesystem that sits on top of DRBD (device, mountpoint and filesystem type):
crm configure primitive postgres_fs ocf:heartbeat:Filesystem \
params device="/dev/drbd0" directory="/var/lib/pgsql" fstype="ext3"
Configuring PostgreSQL on Cluster
Adding the postgresql resource to the cluster:
crm configure primitive postgresql ocf:heartbeat:pgsql \
op monitor depth="0" timeout="30" interval="30"
Now we need to group the DBIP, the postgresql resource and the DRBD-backed filesystem. The name of the group will be "postgres":
crm configure group postgres postgres_fs DBIP postgresql
Tying the postgres group to the node where DRBD is Primary (colocation constraint):
crm configure colocation postgres_on_drbd inf: postgres ms_drbd_postgres:Master
Configuring the postgres group to start only after DRBD has been promoted (ordering constraint):
crm configure order postgres_after_drbd inf: ms_drbd_postgres:promote postgres:start
Showing cluster configuration:
crm configure show
[root@node1 ~]# crm configure show
node node1.clusterbr.int
node node2.clusterbr.int
primitive DBIP ocf:heartbeat:IPaddr2 \
params ip="10.0.0.190" cidr_netmask="24" \
op monitor interval="30s"
primitive drbd_postgres ocf:linbit:drbd \
params drbd_resource="postgres" \
op monitor interval="15s"
primitive postgres_fs ocf:heartbeat:Filesystem \
params device="/dev/drbd0" directory="/var/lib/pgsql" fstype="ext3"
primitive postgresql ocf:heartbeat:pgsql \
op monitor interval="30" timeout="30" depth="0" \
meta target-role="Started"
group postgres postgres_fs DBIP postgresql \
meta target-role="Started"
ms ms_drbd_postgres drbd_postgres \
meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1" notify="true"
colocation postgres_on_drbd inf: postgres ms_drbd_postgres:Master
order postgres_after_drbd inf: ms_drbd_postgres:promote postgres:start
property $id="cib-bootstrap-options" \
dc-version="1.0.9-89bd754939df5150de7cd76835f98fe90851b677" \
cluster-infrastructure="openais" \
expected-quorum-votes="2" \
stonith-enabled="false" \
no-quorum-policy="ignore"
rsc_defaults $id="rsc-options" \
resource-stickiness="100"
[root@node1 ~]#
Setting the Preferred Node
It's important that Pacemaker knows where we prefer to run the services. To make node1 the preferred node, use:
crm configure location master-prefer-node1 DBIP 50: node1.clusterbr.int
Note that the weight preferring node1 is 50. So if the services are running on node2, Pacemaker will not move them back to node1
automatically, because we configured resource-stickiness to 100 (see above), which outweighs the location preference.
In other words, even after node1 has recovered from downtime, the cluster will keep the services on node2.
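If you want to see the constraint as it was stored, you can filter it out of the full configuration (optional):

crm configure show | grep location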
Showing status:
crm_mon -1
[root@node2 ~]# crm_mon -1
============
Last updated: Fri Oct 29 19:54:09 2010
Stack: openais
Current DC: node2.clusterbr.int - partition with quorum
Version: 1.0.9-89bd754939df5150de7cd76835f98fe90851b677
2 Nodes configured, 2 expected votes
2 Resources configured.
============
Online: [ node2.clusterbr.int node1.clusterbr.int ]
Master/Slave Set: ms_drbd_postgres
Masters: [ node2.clusterbr.int ]
Slaves: [ node1.clusterbr.int ]
Resource Group: postgres
postgres_fs (ocf::heartbeat:Filesystem): Started node2.clusterbr.int
DBIP (ocf::heartbeat:IPaddr2): Started node2.clusterbr.int
postgresql (ocf::heartbeat:pgsql): Started node2.clusterbr.int
You may get some errors in the status output; in that case, reboot both nodes so that Corosync can complete its configuration.
After the reboots, you should be able to connect to PostgreSQL through the DBIP (10.0.0.190) on TCP port 5432.
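A quick connectivity check from any machine on the cluster network, assuming the postgres user and the client access rules configured on the earlier pages of this article:

psql -h 10.0.0.190 -U postgres -c "select version();"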
To test the cluster, you can power off the active node or stop the corosync service on it.
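For example, a simple failover test, assuming node2 is currently the active node as in the status above:

# on node2 (the active node)
/etc/init.d/corosync stop

# on node1, watch the resources being taken over
crm_mon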
Cluster management
The following commands are very helpful for managing the cluster.
To migrate a resource to other node, do:
crm resource migrate postgres node1.clusterbr.int
To remove the above migrate command, do:
crm resource unmigrate postgres
To clean resource messages, do:
crm resource cleanup postgres
To stop postgresql service on cluster, do:
crm resource stop postgresql
To start postgresql service on cluster, do:
crm resource start postgresql
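For example, a manual switchover back to node1 and a return to normal operation, using only the commands above, could look like this:

crm resource migrate postgres node1.clusterbr.int
crm_mon -1
crm resource unmigrate postgres

The crm_mon -1 call is only there to confirm that the whole postgres group has started on node1 before the migration constraint is removed.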