Openfiler 2.99 Active/Passive With Corosync, Pacemaker And DRBD - Page 2
4. Prepare everything for the first corosync start
First we prepare our nodes for a restart. To do this, we disable some services that will be handled by Corosync later on.
root@filer01~# chkconfig --level 2345 openfiler off
root@filer01~# chkconfig --level 2345 nfs-lock off
root@filer01~# chkconfig --level 2345 corosync on
Do the same on the other node:
root@filer02~# chkconfig --level 2345 openfiler off
root@filer02~# chkconfig --level 2345 nfs-lock off
root@filer02~# chkconfig --level 2345 corosync on
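To double-check that the runlevel links were set as intended, you can list them with chkconfig (the exact output depends on your installation):
root@filer01~# chkconfig --list openfiler
root@filer01~# chkconfig --list corosync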
Now restart both nodes and check in the next part whether Corosync runs properly. Do not enable DRBD, as it will be handled by Corosync.
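A plain reboot on each node does the job:
root@filer01~# reboot
root@filer02~# reboot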
4.1 Check if corosync started properly
root@filer01~# ps auxf
root 3480 0.0 0.8 534456 4112 ? Ssl 19:15 0:00 corosync
root 3486 0.0 0.5 68172 2776 ? S 19:15 0:00 \_ /usr/lib64/heartbeat/stonith
106 3487 0.0 1.0 67684 4956 ? S 19:15 0:00 \_ /usr/lib64/heartbeat/cib
root 3488 0.0 0.4 70828 2196 ? S 19:15 0:00 \_ /usr/lib64/heartbeat/lrmd
106 3489 0.0 0.6 68536 3096 ? S 19:15 0:00 \_ /usr/lib64/heartbeat/attrd
106 3490 0.0 0.6 69064 3420 ? S 19:15 0:00 \_ /usr/lib64/heartbeat/pengine
106 3491 0.0 0.7 76764 3488 ? S 19:15 0:00 \_ /usr/lib64/heartbeat/crmd
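Besides the process list, corosync ships with corosync-cfgtool; its -s switch prints the ring status and is a quick way to confirm that the totem ring is up and fault-free:
root@filer01~# corosync-cfgtool -s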
root@filer02~# crm_mon --one-shot -V
crm_mon[3602]: 2011/03/24_19:32:07 ERROR: unpack_resources: Resource start-up disabled since no STONITH resources have been defined
crm_mon[3602]: 2011/03/24_19:32:07 ERROR: unpack_resources: Either configure some or disable STONITH with the stonith-enabled option
crm_mon[3602]: 2011/03/24_19:32:07 ERROR: unpack_resources: NOTE: Clusters with shared data need STONITH to ensure data integrity
============
Last updated: Thu Mar 24 19:32:07 2011
Stack: openais
Current DC: filer01 - partition with quorum
Version: 1.1.2-c6b59218ee949eebff30e837ff6f3824ed0ab86b
2 Nodes configured, 2 expected votes
0 Resources configured.
============
Online: [ filer01 filer02 ]
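The STONITH errors above are expected at this point: no STONITH resources exist yet, and we will explicitly disable STONITH in the next step. If you want to re-run this validation against the live CIB at any time, crm_verify performs the same check:
root@filer01~# crm_verify -L -V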
4.2 Configure Corosync as follows
Before you start configuring, open a monitor on filer02 so you can watch the cluster status while the resources come up:
root@filer02~# crm_mon
4.2.1 How to configure Corosync step by step
root@filer01~# crm configure
crm(live)configure# property stonith-enabled="false"
crm(live)configure# property no-quorum-policy="ignore"
crm(live)configure# rsc_defaults $id="rsc-options" \
> resource-stickiness="100"
crm(live)configure# primitive ClusterIP ocf:heartbeat:IPaddr2 \
> params ip="10.10.11.105" cidr_netmask="32" \
> op monitor interval="30s"
crm(live)configure# primitive MetaFS ocf:heartbeat:Filesystem \
> params device="/dev/drbd0" directory="/meta" fstype="ext3"
crm(live)configure# primitive lvmdata ocf:heartbeat:LVM \
> params volgrpname="data"
crm(live)configure# primitive drbd_meta ocf:linbit:drbd \
> params drbd_resource="meta" \
> op monitor interval="15s"
crm(live)configure# primitive drbd_data ocf:linbit:drbd \
> params drbd_resource="data" \
> op monitor interval="15s"
crm(live)configure# primitive openfiler lsb:openfiler
crm(live)configure# primitive iscsi lsb:iscsi-target
crm(live)configure# primitive samba lsb:smb
crm(live)configure# primitive nfs lsb:nfs
crm(live)configure# primitive nfs-lock lsb:nfs-lock
crm(live)configure# group g_drbd drbd_meta drbd_data
crm(live)configure# group g_services MetaFS lvmdata openfiler ClusterIP iscsi samba nfs nfs-lock
crm(live)configure# ms ms_g_drbd g_drbd \
> meta master-max="1" master-node-max="1" \
> clone-max="2" clone-node-max="1" \
> notify="true"
crm(live)configure# colocation c_g_services_on_g_drbd inf: g_services ms_g_drbd:Master
crm(live)configure# order o_g_services_after_g_drbd inf: ms_g_drbd:promote g_services:start
crm(live)configure# commit
Now watch in the monitor process how the resources all come up:
root@filer01 ~# crm_mon
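If you would rather not type the configuration interactively, the crm shell can also load it from a file; a minimal sketch, assuming you saved the statements above to /root/cluster.crm (a path chosen here just for illustration) and that your crm shell build supports the load command:
root@filer01~# crm configure load update /root/cluster.crm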
4.2.2 Troubleshooting
If you get errors because you ran commit before the configuration was complete, you need to clean up the affected resource, as in this example:
root@filer01~# crm
crm(live)# resource cleanup MetaFS
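The same cleanup can be done non-interactively with crm_resource:
root@filer01~# crm_resource --resource MetaFS --cleanup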
4.2.3 Verify the config
To verify the config:
root@filer01~# crm configure show
node filer01 \
attributes standby="off"
node filer02 \
attributes standby="off"
primitive ClusterIP ocf:heartbeat:IPaddr2 \
params ip="10.10.11.105" cidr_netmask="32" \
op monitor interval="30s"
primitive MetaFS ocf:heartbeat:Filesystem \
params device="/dev/drbd0" directory="/meta" fstype="ext3"
primitive drbd_data ocf:linbit:drbd \
params drbd_resource="data" \
op monitor interval="15s"
primitive drbd_meta ocf:linbit:drbd \
params drbd_resource="meta" \
op monitor interval="15s"
primitive lvmdata ocf:heartbeat:LVM \
params volgrpname="data"
primitive openfiler lsb:openfiler
primitive iscsi lsb:iscsi-target
primitive samba lsb:smb
primitive nfs lsb:nfs
primitive nfs-lock lsb:nfs-lock
group g_drbd drbd_meta drbd_data
group g_services MetaFS lvmdata openfiler ClusterIP iscsi samba nfs nfs-lock
ms ms_g_drbd g_drbd \
meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1" notify="true"
colocation c_g_services_on_g_drbd inf: g_services ms_g_drbd:Master
order o_g_services_after_g_drbd inf: ms_g_drbd:promote g_services:start
property $id="cib-bootstrap-options" \
dc-version="1.1.2-c6b59218ee949eebff30e837ff6f3824ed0ab86b" \
cluster-infrastructure="openais" \
expected-quorum-votes="2" \
stonith-enabled="false" \
no-quorum-policy="ignore" \
last-lrm-refresh="1301801257"
rsc_defaults $id="rsc-options" \
resource-stickiness="100"
5. Adapt the Setup to Your Needs
Contrary to Openfiler 2.3, where you had to manually exchange the haresources file after each change to the services, here the configuration is replicated automatically, no matter on which node you change it. Furthermore, you can modify the setup and remove services from it: the configuration above starts all services used by Openfiler, but in the end you only need to run the ones you actually use.
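For example, if you do not need Samba, you can take it out of the cluster; a minimal sketch using the crm shell (edit opens the group definition in your editor, where you remove samba from the member list before deleting the now-unreferenced primitive):
root@filer01~# crm configure
crm(live)configure# edit g_services
crm(live)configure# delete samba
crm(live)configure# commit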