How To Set Up An Active/Passive PostgreSQL Cluster With Pacemaker, Corosync, And DRBD (CentOS 5.5) - Page 4

This article explains how to set up (and monitor) an Active/Passive PostgreSQL Cluster, using Pacemaker with Corosync and DRBD. Prepared by Rafael Marangoni, from BRLink Servidor Linux Team.

8. Creating a webpage to show the cluster status

This setup is very useful for quickly checking the cluster status. These steps must be performed on both nodes.

Start Apache (if it's not running) and enable it at boot:

/etc/init.d/httpd start
chkconfig --level 35 httpd on
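
To confirm that Apache is answering, an optional quick check (curl ships with CentOS):

curl -I http://localhost/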

Create a cluster directory (under the DocumentRoot):

mkdir /var/www/html/cluster/

To generate the HTML, run:

crm_mon --daemonize --as-html /var/www/html/cluster/index.html

Let's add it to rc.local, so it runs automatically on startup:

 echo "crm_mon --daemonize --as-html /var/www/html/cluster/index.html" >> /etc/rc.d/rc.local 

To access it, point your browser to http://10.0.0.190/cluster
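
You can also confirm from the shell that crm_mon is regenerating the page (an optional check; the file's timestamp updates as the cluster state changes):

ls -l /var/www/html/cluster/index.html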

 

9. Installing phpPgAdmin to manage postgres

It's very simple. Remember to do this on both nodes:

First, download it:

mkdir /download
cd /download
wget 'http://downloads.sourceforge.net/project/phppgadmin/phpPgAdmin%20%5Bbeta%5D/phpPgAdmin-5.0/phpPgAdmin-5.0-beta2.tar.bz2?r=http%3A%2F%2Fphppgadmin.sourceforge.net%2F%3Fpage%3Ddownload&ts=1288189530&use_mirror=ufpr'

Then, install:

tar -jxvf phpPgAdmin-5.0-beta2.tar.bz2
mv phpPgAdmin-5.0-beta2 /var/www/html/cluster-pgadmin
chown apache.apache -R /var/www/html/cluster-pgadmin

To access it, point your browser to: http://10.0.0.190/cluster-pgadmin
PS: Log in with the user admpgsql and the password that you configured earlier.
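
By default phpPgAdmin connects to the local postgres server, which works here because the page is served by the node that also runs postgres. If you prefer to point it explicitly at the cluster IP, edit the config file (a sketch, assuming the default phpPgAdmin 5.0 layout):

vi /var/www/html/cluster-pgadmin/conf/config.inc.php

and set:

$conf['servers'][0]['host'] = '10.0.0.190';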

 

10. Accessing postgres from the network

If you need to access postgres from the LAN, don't forget to configure authentication in postgres:

Here, we'll set md5 authentication for the network 10.0.0.0/24. On the node where postgresql is running, do:

echo "host all all 10.0.0.0/24 md5">> /var/lib/pgsql/data/pg_hba.conf

Then, restart postgres so it reloads the configuration:

crm resource stop postgresql
crm resource start postgresql
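
From any machine in the 10.0.0.0/24 network you can now test the connection (assuming the postgres client is installed there; you'll be prompted for the admpgsql password):

psql -h 10.0.0.190 -U admpgsql -d postgres -c "SELECT version();"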

 

11. Monitoring

Cluster monitoring is mandatory in production scenarios. To make this work with Zabbix, we suggest that you install the Zabbix agent on every node.
Then monitor these items on every node:

  •   Check Local Ping (10.0.0.191, 10.0.0.192 and 172.16.0.1, 172.16.0.2)
  •   Check DBIP (Cluster IP) 10.0.0.190
  •   Check Postgres TCP Port (5432) on DBIP 10.0.0.190
  •   General checks, like disk use, memory, processor
  •   Use the following (very simple) script, monitor_drbd.sh, which returns 1 when everything is OK and 0 when there are problems

Here is monitor_drbd.sh, to be used with Zabbix:

#!/bin/bash

# Extract the disk state (ds) field from /proc/drbd.
CHECK=`cat /proc/drbd | grep UpToDate/UpToDate | cut -d: -f5 | cut -c1-17`
STRING_OK="UpToDate/UpToDate"

# Compare the two strings.
if [ "$CHECK" == "$STRING_OK" ] ; then
	# Everything is OK, return 1
	echo 1;
else
	# Not OK, return 0
	echo 0;
fi
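
To wire the script into Zabbix, install it on both nodes and add a UserParameter to the agent configuration (a sketch; the key name drbd.status, the script path, and the config file location are choices that depend on your install):

cp monitor_drbd.sh /usr/local/bin/
chmod +x /usr/local/bin/monitor_drbd.sh
echo "UserParameter=drbd.status,/usr/local/bin/monitor_drbd.sh" >> /etc/zabbix/zabbix_agentd.conf

Then restart the Zabbix agent and create an item with the key drbd.status, with a trigger that fires when the value is 0.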

 



Comments

By: Jeff

I don't think the configuration you gave will provide any high availability for network failures. You mentioned STONITH, but said you could do this without it. In this configuration, if you pull the cable for eth0 from node1, then node2 would not take over. DRBD would prevent node2 from becoming the primary. Do you have any recommendations for that scenario without a STONITH device?

By: Chris

Sure you can, but it all depends on your network infrastructure. Use redundant switches and NIC bonding for both the LAN and the crossover and you're set. I typically create a replication VLAN with bonding in this scenario. The downside is that you need 4 NICs per box, but the general config outlined in this document will work. I also built one in a VMware (vSphere cluster environment in my case) and achieved HA that way. Both ways have worked well for me. This is really just a reference guide. If you're a little creative and make adjustments to suit the details of your environment, you'll find a way. I promise. I've even managed to use this as a general guide for achieving the same cluster mechanism for MySQL.