How To Set Up An Active/Passive PostgreSQL Cluster With Pacemaker, Corosync, And DRBD (CentOS 5.5) - Page 2
This article explains how to set up (and monitor) an Active/Passive PostgreSQL Cluster, using Pacemaker with Corosync and DRBD. Prepared by Rafael Marangoni, from BRLink Servidor Linux Team.
4. Configuring DRBD
First, we need to configure /etc/drbd.conf on both nodes:
vi /etc/drbd.conf
global { usage-count no; }
common { syncer { rate 100M; } protocol C; }
resource postgres {
        startup {
                wfc-timeout 0;
                degr-wfc-timeout 120;
        }
        disk { on-io-error detach; }
        on node1.clusterbr.int {
                device /dev/drbd0;
                disk /dev/sdb;
                address 172.16.0.1:7791;
                meta-disk internal;
        }
        on node2.clusterbr.int {
                device /dev/drbd0;
                disk /dev/sdb;
                address 172.16.0.2:7791;
                meta-disk internal;
        }
}
The main points of the configuration are:
resource: refers to the resource that will be managed by DRBD; note that we named it "postgres"
disk: refers to the device that DRBD will use (a disk or partition)
address: the IP address/port that DRBD will use (note that we point to the cross-over interfaces)
syncer: the transfer rate between the nodes (we use 100M because we have Gigabit cards)
If you have any doubts, please look at the DRBD Users Guide: www.drbd.org/users-guide-emb/
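Before creating the metadata, you can optionally ask drbdadm to parse and print the configuration; this is a quick way to catch syntax errors in /etc/drbd.conf (the check is not required):
drbdadm dump postgres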
After that configuration is in place, we can create the metadata on the postgres resource. On both nodes, do:
drbdadm create-md postgres
node1:
drbdadm create-md postgres
[root@node1 ~]# drbdadm create-md postgres
Writing meta data...
initializing activity log
NOT initialized bitmap
New drbd meta data block successfully created.
node2:
drbdadm create-md postgres
[root@node2 ~]# drbdadm create-md postgres
Writing meta data...
initializing activity log
NOT initialized bitmap
New drbd meta data block successfully created.
Next, we need to bring the resource up and connect it. Again, on both nodes, do:
drbdadm up postgres
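Optionally, you can verify that the two nodes see each other by querying the connection state (a quick sanity check, not required):
drbdadm cstate postgres
It should report Connected once both sides are up; the disks will remain Inconsistent until the initial sync below.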
Now we can start the initial sync between the nodes. This needs to be done only on the primary node; here we choose node1.
Then, only on node1:
drbdadm -- --overwrite-data-of-peer primary postgres
To check the progress of the sync and the status of the DRBD resource, look at /proc/drbd:
cat /proc/drbd
[root@node1 ~]# cat /proc/drbd
version: 8.3.8 (api:88/proto:86-94)
GIT-hash: d78846e52224fd00562f7c225bcc25b2d422321d build by [email protected], 2010-06-04 08:04:09
0: cs:SyncSource ro:Primary/Secondary ds:UpToDate/Inconsistent C r----
ns:48128 nr:0 dw:0 dr:48128 al:0 bm:2 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:8340188
[>....................] sync'ed: 0.6% (8144/8188)M delay_probe: 7
finish: 0:11:29 speed: 12,032 (12,032) K/sec
Now we need to wait for the sync to finish. This may take a long time, depending on the size and performance of your disks and, of course, on the speed of the cluster network interfaces connected with the cross-over cable.
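Instead of running cat repeatedly, you can optionally follow the progress with watch (press Ctrl+C to leave it):
watch -n2 cat /proc/drbd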
When the sync process ends, we can take a look at the status of the resource postgres:
node1:
cat /proc/drbd
[root@node1 ~]# cat /proc/drbd
version: 8.3.8 (api:88/proto:86-94)
GIT-hash: d78846e52224fd00562f7c225bcc25b2d422321d build by [email protected], 2010-06-04 08:04:09
0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r----
ns:8388316 nr:0 dw:0 dr:8388316 al:0 bm:512 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:0
node2:
cat /proc/drbd
[root@node2 ~]# cat /proc/drbd
version: 8.3.8 (api:88/proto:86-94)
GIT-hash: d78846e52224fd00562f7c225bcc25b2d422321d build by [email protected], 2010-06-04 08:04:09
0: cs:Connected ro:Secondary/Primary ds:UpToDate/UpToDate C r----
ns:0 nr:8388316 dw:8388316 dr:0 al:0 bm:512 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:0
To learn what all the status information means, take a look at: www.drbd.org/users-guide-emb/ch-admin.html#s-proc-drbd
5. Configuring PostgreSQL
First, we need to start the DRBD service so that we can run initdb. On both nodes, do:
/etc/init.d/drbd start
As we chose before, node1 will be the primary. To make sure, on node1:
cat /proc/drbd
[root@node1 ~]# cat /proc/drbd
version: 8.3.8 (api:88/proto:86-94)
GIT-hash: d78846e52224fd00562f7c225bcc25b2d422321d build by [email protected], 2010-06-04 08:04:09
0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r----
ns:8388316 nr:0 dw:0 dr:8388316 al:0 bm:512 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:0
The Primary/Secondary info means that the local server is the Primary and the other one is the Secondary.
The UpToDate/UpToDate info means that the resource is up to date on both nodes.
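If you prefer not to read /proc/drbd, drbdadm can optionally report the same information directly (the local value is printed first, the peer value second):
drbdadm role postgres
drbdadm dstate postgres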
Next, we need to format the DRBD device. Here we chose ext3 as the filesystem. Only on node1, do:
mkfs.ext3 /dev/drbd0
Afterwards, we can mount the device. The mountpoint that we use is the default PostgreSQL location (on Red Hat-based systems).
Only on node1, do:
mount -t ext3 /dev/drbd0 /var/lib/pgsql
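Optionally, you can confirm that the device is mounted where PostgreSQL expects its data:
df -h /var/lib/pgsql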
Next, we change the owner and group of the mountpoint.
Only on node1, do:
chown postgres.postgres /var/lib/pgsql
Now, we need to initialize the PostgreSQL database:
Only on node1, do:
su - postgres
initdb /var/lib/pgsql/data
exit
I prefer to enable trusted authentication for the node IPs and the cluster IP.
Only on node1, do:
echo "host all all 10.0.0.191/32 trust" >> /var/lib/pgsql/data/pg_hba.conf
echo "host all all 10.0.0.192/32 trust" >> /var/lib/pgsql/data/pg_hba.conf
echo "host all all 10.0.0.190/32 trust" >> /var/lib/pgsql/data/pg_hba.conf
Another thing we need to configure is to make PostgreSQL listen on all interfaces.
Only on node1, do:
vi /var/lib/pgsql/data/postgresql.conf
Uncomment and change only the line:
listen_addresses = '0.0.0.0'
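To double-check the change (optional):
grep ^listen_addresses /var/lib/pgsql/data/postgresql.conf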
Now, we start PostgreSQL.
Only on node1, do:
/etc/init.d/postgresql start
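Optionally, confirm that PostgreSQL is listening on all interfaces on the default port 5432:
netstat -tlnp | grep 5432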
Then we can create an admin user to manage PostgreSQL:
Only on node1, do:
su - postgres
createuser --superuser admpgsql --pwprompt
You'll need to set a password for admpgsql.
Afterwards, we create a database and populate it with pgbench.
Only on node1, do:
su - postgres
createdb pgbench
pgbench -i pgbench
pgbench populates the database with some sample data; the objective is to test PostgreSQL.
pgbench -i pgbench
-bash-3.2$ pgbench -i pgbench
NOTICE: table "pgbench_branches" does not exist, skipping
NOTICE: table "pgbench_tellers" does not exist, skipping
NOTICE: table "pgbench_accounts" does not exist, skipping
NOTICE: table "pgbench_history" does not exist, skipping
creating tables...
10000 tuples done.
20000 tuples done.
30000 tuples done.
40000 tuples done.
50000 tuples done.
60000 tuples done.
70000 tuples done.
80000 tuples done.
90000 tuples done.
100000 tuples done.
set primary key...
NOTICE: ALTER TABLE / ADD PRIMARY KEY will create implicit index "pgbench_branches_pkey" for table "pgbench_branches"
NOTICE: ALTER TABLE / ADD PRIMARY KEY will create implicit index "pgbench_tellers_pkey" for table "pgbench_tellers"
NOTICE: ALTER TABLE / ADD PRIMARY KEY will create implicit index "pgbench_accounts_pkey" for table "pgbench_accounts"
vacuum...done.
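While still logged in as the postgres user, you can optionally run a short benchmark to exercise the new database (the client and transaction counts below are arbitrary values, just for a quick test):
pgbench -c 4 -t 25 pgbench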
Now, we'll access the database to check if everything is ok:
Only on node1, do:
psql -U admpgsql -d pgbench
select * from pgbench_tellers;
psql -U admpgsql -d pgbench
psql (8.4.5)
Type "help" for help.
pgbench=# select * from pgbench_tellers;
tid | bid | tbalance | filler
-----+-----+----------+--------
1 | 1 | 0 |
2 | 1 | 0 |
3 | 1 | 0 |
4 | 1 | 0 |
5 | 1 | 0 |
6 | 1 | 0 |
7 | 1 | 0 |
8 | 1 | 0 |
9 | 1 | 0 |
10 | 1 | 0 |
(10 rows)
Afterwards, all the PostgreSQL configuration is done.
Checking if PostgreSQL will work on node2
Before we start managing the services with Pacemaker, it's better to test if PostgreSQL will work on node2.
node1:
First, on node1, we need to stop PostgreSQL:
/etc/init.d/postgresql stop
Then we unmount the DRBD device:
umount /dev/drbd0
Now we need to demote node1 to Secondary on the DRBD resource:
drbdadm secondary postgres
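Optionally, confirm that node1 is no longer Primary (the local role is printed first):
drbdadm role postgres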
node2:
First, on node2, we need to promote node2 to Primary on the DRBD resource:
drbdadm primary postgres
Then, we mount the DRBD device:
mount -t ext3 /dev/drbd0 /var/lib/pgsql/
Finally, we start PostgreSQL:
/etc/init.d/postgresql start
Let's check if we can access the pgbench db on node2:
psql -U admpgsql -d pgbench
select * from pgbench_tellers;
[root@node2 ~]# psql -U admpgsql -d pgbench
psql (8.4.5)
Type "help" for help.
pgbench=# select * from pgbench_tellers;
tid | bid | tbalance | filler
-----+-----+----------+--------
1 | 1 | 0 |
2 | 1 | 0 |
3 | 1 | 0 |
4 | 1 | 0 |
5 | 1 | 0 |
6 | 1 | 0 |
7 | 1 | 0 |
8 | 1 | 0 |
9 | 1 | 0 |
10 | 1 | 0 |
(10 rows)
Now that everything is OK, we should stop all the services to begin the cluster configuration:
node2:
/etc/init.d/postgresql stop
umount /dev/drbd0
drbdadm secondary postgres
/etc/init.d/drbd stop
node1:
drbdadm primary postgres
/etc/init.d/drbd stop
We need to ensure that all these services are disabled at boot time.
On both nodes, do:
chkconfig --level 35 drbd off
chkconfig --level 35 postgresql off
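Optionally, confirm that both services are disabled for runlevels 3 and 5:
chkconfig --list drbd
chkconfig --list postgresql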