Setting Up An Active/Active Samba CTDB Cluster Using GFS & DRBD (CentOS 5.5) - Page 2
This article explains how to set up an Active/Active Samba CTDB cluster using GFS and DRBD. Prepared by Rafael Marangoni, from the BRLink Servidor Linux team.
3. Installing Prerequisites And Cluster Packages
There are some packages that need to be installed first:
yum -y install drbd82 kmod-drbd82 samba
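If you want to confirm that these packages were installed correctly, a quick optional check is to query them with rpm:
rpm -q drbd82 kmod-drbd82 samba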
Let's install Red Hat Cluster Suite:
yum -y groupinstall "Cluster Storage" "Clustering"
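Optionally, you can verify that the main components of these two groups were pulled in (the exact package set may vary slightly with your CentOS 5.5 repositories):
rpm -q cman gfs-utils kmod-gfs gfs2-utils lvm2-cluster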
4. Configuring DRBD
First, we need to configure /etc/drbd.conf on both nodes:
vi /etc/drbd.conf
global {
    usage-count yes;
}
common {
    syncer {
        rate 100M;
        al-extents 257;
    }
}
resource r0 {
    protocol C;
    startup {
        become-primary-on both;    ### For Primary/Primary ###
        degr-wfc-timeout 60;
        wfc-timeout 30;
    }
    disk {
        on-io-error detach;
    }
    net {
        allow-two-primaries;       ### For Primary/Primary ###
        cram-hmac-alg sha1;
        shared-secret "mysecret";
        after-sb-0pri discard-zero-changes;
        after-sb-1pri violently-as0p;
        after-sb-2pri violently-as0p;
    }
    on node1.clustersmb.int {
        device /dev/drbd0;
        disk /dev/sdb;
        address 172.16.0.1:7788;
        meta-disk internal;
    }
    on node2.clustersmb.int {
        device /dev/drbd0;
        disk /dev/sdb;
        address 172.16.0.2:7788;
        meta-disk internal;
    }
}
The main points of configuration are:
resource: refers to the resource that will be managed by DRBD; note that we called it "r0"
disk: refers to the device that DRBD will use (a disk or partition)
address: the IP address/port that DRBD will use (note that we point to the cross-over interfaces)
syncer: the transfer rate between the nodes (we use 100M because we have Gigabit cards)
If you have doubts, please look at the DRBD Users Guide: www.drbd.org/users-guide-emb/
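Before creating the metadata, you can optionally let drbdadm parse the configuration; if /etc/drbd.conf contains a syntax error, the dump will report it:
drbdadm dump r0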
After that configuration, we can create the metadata on the r0 resource. On both nodes, do:
drbdadm create-md r0
Next, we need to start the DRBD service to initialize the device. On both nodes (at almost the same time), do:
/etc/init.d/drbd start
To make both nodes primary, on both nodes do:
drbdsetup /dev/drbd0 primary -o
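If you prefer working through drbdadm (which reads /etc/drbd.conf), the following should be an equivalent way to force the node to primary on DRBD 8.2; it is shown here only as an alternative:
drbdadm -- --overwrite-data-of-peer primary r0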
To check the progress of the sync and the status of the DRBD resource, look at /proc/drbd:
cat /proc/drbd
Now we need to wait for the sync to finish. This may take a long time, depending on the size and performance of your disks (and, of course, on the speed of the cluster network interfaces connected by the cross-over cable).
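To follow the synchronization continuously instead of re-running cat by hand, you can optionally use watch:
watch -n1 cat /proc/drbd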
When the sync process ends, we can take a look at the status of the resource r0:
node1:
[root@node1 ~]# cat /proc/drbd
version: 8.2.6 (api:88/proto:86-88)
GIT-hash: 3e69822d3bb4920a8c1bfdf7d647169eba7d2eb4 build by buildsvn@c5-x8664-build, 2008-10-03 11:30:17
0: cs:Connected st:Primary/Primary ds:UpToDate/UpToDate C r---
ns:2097052 nr:0 dw:0 dr:2097052 al:0 bm:128 lo:0 pe:0 ua:0 ap:0 oos:0
node2:
[root@node2 ~]# cat /proc/drbd
version: 8.2.6 (api:88/proto:86-88)
GIT-hash: 3e69822d3bb4920a8c1bfdf7d647169eba7d2eb4 build by buildsvn@c5-x8664-build, 2008-10-03 11:30:17
0: cs:Connected st:Primary/Primary ds:UpToDate/UpToDate C r---
ns:0 nr:2097052 dw:2097052 dr:0 al:0 bm:128 lo:0 pe:0 ua:0 ap:0 oos:0
It's important to note that both servers are up to date (UpToDate/UpToDate) and primary (Primary/Primary).
To learn what all the status information means, take a look at: www.drbd.org/users-guide-emb/ch-admin.html#s-proc-drbd
We need to make the DRBD service start automatically at boot:
chkconfig --level 35 drbd on
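You can confirm that the service is enabled on the desired runlevels with:
chkconfig --list drbd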
5. Configuring GFS
Now we must configure GFS (Red Hat Global File System), which is a cluster filesystem that we will use on top of DRBD.
First, we need to configure /etc/cluster/cluster.conf on both nodes:
vi /etc/cluster/cluster.conf
<?xml version="1.0"?>
<cluster name="cluster1" config_version="3">
  <cman two_node="1" expected_votes="1"/>
  <clusternodes>
    <clusternode name="node1.clustersmb.int" votes="1" nodeid="1">
      <fence>
        <method name="single">
          <device name="manual" ipaddr="10.0.0.181"/>
        </method>
      </fence>
    </clusternode>
    <clusternode name="node2.clustersmb.int" votes="1" nodeid="2">
      <fence>
        <method name="single">
          <device name="manual" ipaddr="10.0.0.182"/>
        </method>
      </fence>
    </clusternode>
  </clusternodes>
  <fence_daemon clean_start="1" post_fail_delay="0" post_join_delay="3"/>
  <fencedevices>
    <fencedevice name="manual" agent="fence_manual"/>
  </fencedevices>
</cluster>
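Both nodes must have exactly the same cluster.conf. If you only edited it on node1, one simple way to copy it over (assuming root SSH access between the nodes) is:
scp /etc/cluster/cluster.conf root@node2.clustersmb.int:/etc/cluster/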
Next, we need to start the cman service, on both nodes (at the same time):
/etc/init.d/cman start
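Once cman is running on both nodes, you can check that they see each other; both tools are part of the cluster suite:
cman_tool nodes
cman_tool status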
Afterwards, we can start the other services, on both nodes:
/etc/init.d/clvmd start
/etc/init.d/gfs start
/etc/init.d/gfs2 start
We need to ensure that all these services are enabled at boot.
On both nodes, do:
chkconfig --level 35 cman on
chkconfig --level 35 clvmd on
chkconfig --level 35 gfs on
chkconfig --level 35 gfs2 on
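A quick optional way to confirm the runlevels for all four services at once:
chkconfig --list | egrep 'cman|clvmd|gfs'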
Next, format the device, only on one node:
gfs_mkfs -p lock_dlm -t cluster1:gfs -j 2 /dev/drbd0
Now, we create the mountpoint and mount the drbd device, on both nodes:
mkdir /clusterdata
mount -t gfs /dev/drbd0 /clusterdata
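To confirm that the GFS filesystem is actually mounted on both nodes, either of these optional checks should show /clusterdata:
mount | grep clusterdata
df -h /clusterdata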
Let's insert the device on fstab, on both nodes:
vi /etc/fstab
Insert the following line:
/dev/drbd0 /clusterdata gfs defaults 0 0
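If you want to make sure the fstab entry itself works, you can remount through it on one node (an optional check, best done before putting any real data on the filesystem):
umount /clusterdata
mount /clusterdata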
Next, it's good to check that the cluster filesystem is working.
Only on node1, do:
tar -zcvf /clusterdata/backup-test.tgz /etc/
Now, we check if the file exists on node2.
Only on node2, do:
ls -l /clusterdata/
[root@node2 ~]# ls -l /clusterdata
total 12576
-rw-r--r-- 1 root root 12844520 Jul 23 16:01 backup-test.tgz
Now, let's test if node2 can write.
Only on node2, do:
tar -zcvf /clusterdata/backup-test2.tgz /etc/
Now, we check if the two files exist on node1.
Only on node1, do:
ls -l /clusterdata/
[root@node1 ~]# ls -l /clusterdata/
total 25160
-rw-r--r-- 1 root root 12850665 Jul 23 16:03 backup-test2.tgz
-rw-r--r-- 1 root root 12844520 Jul 23 16:01 backup-test.tgz
If everything is fine, we can of course delete the test files.
Only on one node, do:
rm -f /clusterdata/backup*