Installing and Configuring Openfiler with DRBD and Heartbeat - Page 2

Configure LVM Partition

Create /dev/drbd1 as a PV (Physical Volume) for the data volume group, which will be used to create Logical Volumes for data.

First, edit /etc/lvm/lvm.conf and modify the filter line:

From:

filter = [ "a/.*/" ]

To:

filter = [ "r|/dev/sda5|" ]

Note: Change /dev/sda5 to reflect the partition of your LVM. Also remember to apply these changes on both filer01 and filer02.
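
As a quick sanity check after editing the filter (on both nodes), confirm the new filter line is in place and that an LVM scan no longer reports the excluded partition:

root@filer01 ~# grep filter /etc/lvm/lvm.conf
root@filer01 ~# pvscan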

Create the LVM Physical Volume (only do this on the Primary node, as it will replicate to the second node via DRBD):

  root@filer01 /# pvcreate /dev/drbd1
  Physical volume "/dev/drbd1" successfully created

 

Configure Heartbeat

As mentioned before, Heartbeat controls failover between hosts. Both nodes run the Heartbeat service, which sends out a heartbeat pulse on the secondary interface (eth1). If one node dies, Heartbeat detects this and promotes the surviving node to Primary (if it wasn't already) using the resource scripts available in /etc/ha.d/resource.d.
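
The resource scripts Heartbeat can call during a failover can be inspected directly; listing the directory is just a quick way to see what is available on an Openfiler install:

root@filer01 ~# ls /etc/ha.d/resource.d/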

Make modifications to /etc/ha.d/ha.cf and /etc/ha.d/authkeys. Make these changes on both nodes.

In /etc/ha.d/authkeys, add:

auth 2
2 crc

The /etc/ha.d/authkeys file does not appear to exist by default in Openfiler 2.3, so it will need to be created.
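
Note that crc offers integrity checking only, with no real authentication. If the heartbeat link is not a dedicated crossover cable, Heartbeat also supports sha1 with a shared secret; a minimal sketch (the secret string below is just a placeholder) would be:

auth 1
1 sha1 SomeLongRandomSecret

Whichever method is used, the authkeys file must be identical on both nodes and readable only by root, as in the chmod step below.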

Next, restrict permissions on authkeys so only root can read it (on both nodes):

root@filer01 ~# chmod 600 /etc/ha.d/authkeys
root@filer02 ~# chmod 600 /etc/ha.d/authkeys

Create /etc/ha.d/ha.cf on both nodes (it needs to be identical on both, just like /etc/drbd.conf):

debugfile /var/log/ha-debug
logfile /var/log/ha-log
logfacility local0
udpport 694
bcast eth1
keepalive 5
warntime 10
deadtime 120
initdead 120
auto_failback off
node filer01
node filer02

Note that udpport is listed before bcast; a non-default port is only applied if the udpport line comes before the bcast line (see the comments below).

Enable Heartbeat to start up at boot (on both nodes):

root@filer01 ~# chkconfig --level 2345 heartbeat on
root@filer02 ~# chkconfig --level 2345 heartbeat on
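
As an optional check on both nodes, the following should show runlevels 2 through 5 switched on for heartbeat:

root@filer01 ~# chkconfig --list heartbeat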

 

Openfiler Data Configuration

As mentioned above, a 512 MB partition was created to keep the configuration and HA services available during a failover. To get this working, copy the services and Openfiler configuration data over to the new partition, symbolically linking it back to its original location.

filer01:

root@filer01 ~# mkdir /cluster_metadata
root@filer01 ~# mount /dev/drbd0 /cluster_metadata
root@filer01 ~# mv /opt/openfiler/ /opt/openfiler.local
root@filer01 ~# mkdir /cluster_metadata/opt
root@filer01 ~# cp -a /opt/openfiler.local /cluster_metadata/opt/openfiler
root@filer01 ~# ln -s /cluster_metadata/opt/openfiler /opt/openfiler
root@filer01 ~# rm /cluster_metadata/opt/openfiler/sbin/openfiler
root@filer01 ~# ln -s /usr/sbin/httpd /cluster_metadata/opt/openfiler/sbin/openfiler
root@filer01 ~# rm /cluster_metadata/opt/openfiler/etc/rsync.xml
root@filer01 ~# ln -s /opt/openfiler.local/etc/rsync.xml /cluster_metadata/opt/openfiler/etc/
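
As an optional sanity check, confirm that /opt/openfiler now points into the replicated partition and that the relocated openfiler binary is a symlink to httpd:

root@filer01 ~# ls -ld /opt/openfiler
root@filer01 ~# ls -l /cluster_metadata/opt/openfiler/sbin/openfiler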

Then edit our /opt/openfiler.local/etc/rsync.xml file:

<?xml version="1.0" ?>
<rsync>
<remote hostname="10.188.188.2"/> ## IP address of peer node.
<item path="/etc/ha.d/haresources"/>
<item path="/etc/ha.d/ha.cf"/>
<item path="/etc/ldap.conf"/>
<item path="/etc/openldap/ldap.conf"/>
<item path="/etc/ldap.secret"/>
<item path="/etc/nsswitch.conf"/>
<item path="/etc/krb5.conf"/>
</rsync>

  root@filer01 ~# mkdir -p /cluster_metadata/etc/httpd/conf.d

filer02:

root@filer02 ~# mkdir /cluster_metadata
root@filer02 ~# mv /opt/openfiler/ /opt/openfiler.local
root@filer02 ~# ln -s /cluster_metadata/opt/openfiler /opt/openfiler

Change /opt/openfiler.local/etc/rsync.xml on filer02 to reflect the following:

<?xml version="1.0" ?>
<rsync>
<remote hostname="10.188.188.1"/> ## IP address of peer node.
<item path="/etc/ha.d/haresources"/>
<item path="/etc/ha.d/ha.cf"/>
<item path="/etc/ldap.conf"/>
<item path="/etc/openldap/ldap.conf"/>
<item path="/etc/ldap.secret"/>
<item path="/etc/nsswitch.conf"/>
<item path="/etc/krb5.conf"/>
</rsync>

 

Heartbeat Cluster Configuration

Then modify the /cluster_metadata/opt/openfiler/etc/cluster.xml config file. Openfiler uses this file to generate /etc/ha.d/haresources, which tells Heartbeat what it should do in a failover.

filer01 Only:

<?xml version="1.0" ?>
<cluster>
<clustering state="on" />
<nodename value="filer01" />
<resource value="MailTo::[email protected]::ClusterFailover"/>
<resource value="IPaddr::192.168.1.17/24" />
<resource value="drbddisk::" />
<resource value="LVM::vg0drbd" />
<resource value="Filesystem::/dev/drbd0::/cluster_metadata::ext3::defaults,noatime" />
<resource value="MakeMounts"/>
</cluster>

Note how the HA IP address is declared here (192.168.1.17). As mentioned before, Heartbeat brings up the HA network interface, activates the LVM volume group, and mounts drbd0 on /cluster_metadata.
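
For reference, once Openfiler has processed this cluster.xml (see the first-time configuration section below), the generated /etc/ha.d/haresources should end up looking roughly like the following single line; the exact services listed at the end depend on which services are enabled in the GUI (a similar example appears in the comments below):

filer01 MailTo::[email protected]::ClusterFailover IPaddr::192.168.1.17/24 drbddisk:: LVM::vg0drbd Filesystem::/dev/drbd0::/cluster_metadata::ext3::defaults,noatime MakeMounts iscsi-target openfiler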

 

Samba and NFS Support

Move the Samba and NFS configuration and state onto the /cluster_metadata DRBD resource so they remain available after a failover.

filer01:

root@filer01 ~# mkdir /cluster_metadata/etc
root@filer01 ~# mv /etc/samba/ /cluster_metadata/etc/
root@filer01 ~# ln -s /cluster_metadata/etc/samba/ /etc/samba
root@filer01 ~# mkdir -p /cluster_metadata/var/spool
root@filer01 ~# mv /var/spool/samba/ /cluster_metadata/var/spool/
root@filer01 ~# ln -s /cluster_metadata/var/spool/samba/ /var/spool/samba
root@filer01 ~# mkdir -p /cluster_metadata/var/lib
root@filer01 ~# mv /var/lib/nfs/ /cluster_metadata/var/lib/
root@filer01 ~# ln -s /cluster_metadata/var/lib/nfs/ /var/lib/nfs
root@filer01 ~# mv /etc/exports /cluster_metadata/etc/
root@filer01 ~# ln -s /cluster_metadata/etc/exports /etc/exports

Note: This moves /var/spool/samba onto /cluster_metadata, which is only a 512 MB partition, so large print jobs put through Samba will eat up the free space on this volume very quickly. If that is a concern, create a separate DRBD resource for the /var directory, or reconsider hosting print services on a different server.
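
As a rough guide to how much of the 512 MB partition is in use (just an optional check):

root@filer01 ~# df -h /cluster_metadata
root@filer01 ~# du -sh /cluster_metadata/var/spool/samba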

filer02:

root@filer02 ~# rm -rf /etc/samba/
root@filer02 ~# ln -s /cluster_metadata/etc/samba/ /etc/samba
root@filer02 ~# rm -rf /var/spool/samba/
root@filer02 ~# ln -s /cluster_metadata/var/spool/samba/ /var/spool/samba
root@filer02 ~# rm -rf /var/lib/nfs/
root@filer02 ~# ln -s /cluster_metadata/var/lib/nfs/ /var/lib/nfs
root@filer02 ~# rm -rf /etc/exports
root@filer02 ~# ln -s /cluster_metadata/etc/exports /etc/exports

 

iSCSI Support

filer01:

root@filer01 ~# mv /etc/ietd.conf /cluster_metadata/etc/
root@filer01 ~# ln -s /cluster_metadata/etc/ietd.conf /etc/ietd.conf
root@filer01 ~# mv /etc/initiators.allow /cluster_metadata/etc/
root@filer01 ~# ln -s /cluster_metadata/etc/initiators.allow /etc/initiators.allow
root@filer01 ~# mv /etc/initiators.deny /cluster_metadata/etc/
root@filer01 ~# ln -s /cluster_metadata/etc/initiators.deny /etc/initiators.deny

filer02:

root@filer02 ~# rm /etc/ietd.conf
root@filer02 ~# ln -s /cluster_metadata/etc/ietd.conf /etc/ietd.conf
root@filer02 ~# rm /etc/initiators.allow
root@filer02 ~# ln -s /cluster_metadata/etc/initiators.allow /etc/initiators.allow
root@filer02 ~# rm /etc/initiators.deny
root@filer02 ~# ln -s /cluster_metadata/etc/initiators.deny /etc/initiators.deny
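
As pointed out in the comments below, when a new iSCSI LUN is added later on, the iscsi-target service on the passive node may need to be restarted so it sees the new LUN and can fail over correctly; something along these lines (assuming filer02 is the passive node at that moment):

root@filer02 ~# service iscsi-target restart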

 

FTP Support

filer01:

root@filer01 ~# mv /etc/proftpd /cluster_metadata/etc/
root@filer01 ~# ln -s /cluster_metadata/etc/proftpd/ /etc/proftpd

filer02:

root@filer02 ~# rm -rf /etc/proftpd
root@filer02 ~# ln -s /cluster_metadata/etc/proftpd/ /etc/proftpd

 

Configure Volume Group

filer01:

Create a Volume group from /dev/drbd1:

  root@filer01 etc# vgcreate vg0drbd /dev/drbd1
  Volume group "vg0drbd" successfully created

Note: If planning on using Windows to connect to these iSCSI targets, do not use the "_" character or any other special characters in the volume group name.

Once the Heartbeat service has been configured and started (see below), the Openfiler web administration GUI should be available at https://192.168.1.17:446. Once there, LVM volumes can be created and exported via iSCSI, etc.
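
A data volume can also be carved out of the replicated volume group from the command line; the LV name and size below are arbitrary examples, and the same thing can be done through the web GUI:

root@filer01 ~# lvcreate -L 10G -n datavol vg0drbd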

 

Starting Heartbeat and First-Time Configuration

In order to get Openfiler to write the /etc/ha.d/haresources file based on the cluster.xml config file, restart the Openfiler service, then log onto the web interface (using the Primary node's direct IP address), click on Services, and enable iSCSI.

Make sure to do this on the Primary node (filer01). First, point Openfiler's httpd modules directory at the system Apache modules:

root@filer01 ~# rm /opt/openfiler/etc/httpd/modules
root@filer01 ~# ln -s /usr/lib64/httpd/modules /opt/openfiler/etc/httpd/modules

Note: If you use a 32-bit system, just take out the “64”.

  root@filer01 ~# service openfiler restart

With any luck, Openfiler has written out /etc/ha.d/haresources. If haresources was created, copy it over to filer02 (see the scp command below).

Note: Before starting Heartbeat, a volume must be created:

  root@filer01 ~# lvcreate -L 400M -n filer vg0drbd

It appears that if you log onto the web interface and activate a service such as NFS or iSCSI, this will force Openfiler to rewrite the /etc/ha.d/haresources file. Copy (via scp) this file over to the second node as follows:

  root@filer01 ~# scp /etc/ha.d/haresources root@filer02:/etc/ha.d/haresources

Since Heartbeat was added to the startup scripts earlier, reboot filer01, then reboot filer02.

If all goes well, the primary node will be accessible via a web browser on the highly available IP address: https://192.168.1.17:446.

If the web server is not accessible, a good place to look for errors is in the /var/log/ha-log and /var/log/ha-debug files.
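
A few other things worth checking at this stage (the status commands below also appear in the comments); once both nodes are healthy, a manual failover can be exercised with the hb_standby helper shipped with Heartbeat, which may live under /usr/lib64/heartbeat on 64-bit installs:

root@filer01 ~# tail -n 50 /var/log/ha-log
root@filer01 ~# service heartbeat status
root@filer01 ~# service drbd status
root@filer01 ~# /usr/lib/heartbeat/hb_standby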

Note: The rsync configuration is meant to synchronize changes between the two nodes, but this will not happen unless the most recent build is used. See the fix on the following page:

https://forums.openfiler.com/viewtopic.php?id=2380

Once Openfiler is up and running, delete the filer volume created earlier and then create new volumes (be sure to create the new volumes before stopping the Heartbeat service, or it will not start).


Comments

By: Anonymous

Great article!

I would also like to add that if you are using iSCSI, the iscsi-target service must be restarted on the passive node every time you add a new LUN. Otherwise the passive node won't see the new LUN and won't fail over correctly.

By: Anonymous

udpport 694 must be before the bcast statement in the ha.cf. If you have more than one cluster setup, it will use the default 694 and you will get an error.

By: Anonymous

There is a typo in:

/opt/openfiler.local/etc/rsync.xml

<remote hostname="10.188.1881"/> ## IP address of peer node.
should be:
<remote hostname="10.188.188.1"/> ## IP address of peer node.

Also:

<resource value="IPaddr::192.168.1.17/24" />
would be better expressed as:
<resource value="IPaddr::192.168.1.17/32" />
Good article.

By: PatrickD

How do I set the HA IP address to a 3rd, physical NIC? Just add one and configure it with the desired IP?

By: Anonymous

I have tried this like 20 times and it just doesn't work.  The /cluster_metadata never gets replicated to the second node and if the server is rebooted, /cluster_metadata won't mount.

I followed the steps precisely and the only error I get is when trying to move the nfs directory (permissions issues), but that shouldn't cause the drives to not mount or stop replication.

My guess is that version 3.2 of OF is the issue.

 Thanks.

By: Rip

Hi Gilly,

Would prob be a good idea to throw up the credits to parts of the howto and some links on the Openfiler forum that address some people's issues.

 Sources: Eg. http://wiki.hyber.dk/doku.php and http://www.the-mesh.org

 

Cheers

Rip

By: vladdrac

Openfiler doesn't create haresources. Any idea?

By: Anonymous

Hi,

In order to make the haresources file, you need to log in to the Openfiler web interface, go to the Services tab, and enable some services. After that you will see the haresources file in /etc/ha.d/.

Pisey

By: kad

Attempt to update packages (on both nodes):

conary updateall

By: thefusa

I get the same error....I've installed 20 times..same error..

 

By: Adam

First make sure all NFS services are halted and umount rpc_pipefs:

service nfslock stop

service nfs stop

service rpcidmapd stop

umount -a -t rpc_pipefs


...Continued from his guide...

mkdir -p /cluster_metadata/var/lib

mv /var/lib/nfs/ /cluster_metadata/var/lib/

ln -s /cluster_metadata/var/lib/nfs/ /var/lib/nfs


Then one more thing:

mount -t rpc_pipefs sunrpc /var/lib/rpc_pipefs

service rpcidmapd start

By:

Hi...

I configured my openfiler cluster with exactly like this article.

But when I reboot both filers, I get this error on filer01:

Checking filesystems

/1: clean, 30108/767232 files, 170424/767095 blocks

fsck.ext3: Unable to resolve 'LABEL=/meta'

And I get this error on filer02:

/1: clean, 30108/767232 files, 170424/767095 blocks

fsck.ext3: Unable to resolve 'LABEL=/meta1'

Can anyone help me??? I'm despairing. THX

By: sgi

Maya,

 from the recovery console try:

 e2label /dev/xxx /meta

 (where xxx is your device mounted /meta)

 ~sgi

By: maya

Hi sgi

You are the best!

Thank you very much. This resolved my problem!

 Maya

By: Didier

I'm having the same problem, and even though I add /dev/xxx /data, the problem still persists.

This is my /etc/fstab file:

LABEL=/                 /                       ext3    defaults        1 1
LABEL=/boot             /boot                   ext2    defaults        1 2
devpts                  /dev/pts                devpts  gid=5,mode=620  0 0
tmpfs                   /dev/shm                tmpfs   defaults        0 0
LABEL=/meta1            /meta                   ext3    defaults        1 2
/proc                   /proc                   proc    defaults        0 0
/sys                    /sys                    sysfs   defaults        0 0
LABEL=SWAP-hda3         swap                    swap    defaults        0 0

At the recovery console I type:

e2label /dev/hda5 /meta1

After that I reboot and the problem persists, so I then tried the following:

e2label /dev/hda5 /meta

After that I reboot and the problem persists.

If I comment out the line in the fstab file the system boots fine, but Openfiler does not start, complaining that it cannot find /opt/openfiler/etc/httpd/conf/httpd.conf: No such file or directory
/bin/bash: /opt/openfiler/sbin/openfiler: No such file or directory

This is a sample of my /etc/drbd.conf file
resource meta {
        device          /dev/drbd0;
        disk            /dev/hda5;

I have followed the procedure many times already; everything works fine until I reboot the systems.

Any help is appreciated

By: Anonymous

I had the same issue.  I replaced LABEL=/meta with the actual disk information. Ex; /dev/sdb5.  I also added noauto to the options.  My line reads:

 /dev/sdb5               /meta                   ext3    defaults,noauto 1 2

Things worked fine after that.

By: Anonymous

I did the same, but once I reboot filer 1 and 2, the cluster_metadata /dev/sda5 is not mounted and I cannot log in to the HA IP address.

 

Please help!

By: Anonymous

I got the same problem. I tried changing the fstab 'LABEL=/meta' entry to point to the disk partition, as well as all the other tips in the comments, and they did not work.


By: Ben

Just redo the boxes, and after you partition the drives within the install, nano /etc/fstab and remove the /meta line from the file.

By: santosh

A tutorial about how to configure Openfiler 2.99 as an iSCSI target for VMware vSphere 5 (ESXi 5). This should help all users and newbies:

http://www.mytricks.in/2011/09/guide-installing-openfiler-on-vmware.html
http://www.mytricks.in/2011/09/guide-configure-openfiler-299-with.html
http://www.mytricks.in/2011/09/guide-configure-openfiler-299-with_08.html

By:

Some of the resource lines are missing the tag ending />

ie

<resource value="drbddisk::">
<resource value="LVM::vg0drbd">
<resource value="Filesystem::/dev/drbd0::/cluster_metadata::ext3::defaults,noatime">

should be

<resource value="drbddisk::" />
<resource value="LVM::vg0drbd" />
<resource value="Filesystem::/dev/drbd0::/cluster_metadata::ext3::defaults,noatime" />


The rsync code first parses cluster.xml to work out if clustering is enabled, but falls over because the tags are not consistent (no errors are displayed on the Openfiler config pages).
Since it can't determine whether clustering is enabled, the rsync process never happens. This means the files listed in rsync.xml never sync between nodes until the tags are fixed in cluster.xml.

By:

the entry

udpport 694

should go before the entry

bcast eth1

The above config will work fine, as it will always default to port 694 if the udpport entry is in the wrong order. For those who have two HA clusters (like me), you need a different port number for each cluster, which means the udpport line must come before the bcast line.

By: Rombik

Help pls.,

After several minutes /cluster_metadata gets unmounted.

After a restart of the heartbeat service on either node, it is mounted again and then unmounted again.

The cluster node also alternates between ping and no ping.

(The heartbeat and drbd services are both started.)

For example:

[root@filer01 ~]# service heartbeat status
heartbeat OK [pid 16249 et al] is running on filer01 [filer01]...

and node 2 is started too.

By: Anonymous

Hello,

 I've followed the instructions, and everything seems to be working well.  The problem I have is with configuring volumes after DRBD & heartbeat are configured.

 For my initial disk configurations I used /dev/sdb (local storage) and those failover properly.  However, I would now like to configure /dev/sda (Fiber RAID5 storage).  I set everything up in Openfiler, but when I perform a test failover (/usr/lib/heartbeat/hb_standby on the primary node) I don't see the volume I just created on the primary node.

Can someone explain the process a little bit more clearly with regards to setting up volumes in this environment?  Am I missing something?

Thanks,

  -Josh

By:

Hi Techies,

Nice Post I was also looking for mysql replication clustering , oracle clustering along with also implementation of DRBD,High Availability using Heartbeat  too and found a great ebook on http://www.ebooksyours.com/what-is-clustering.html and this ebook was a complete worthy purchasing as it consisted of complete thorough implementation of all clustering technologies with live examples and configurations.

 Cheers !

Akki

By:

In my previous post, I supplied the wrong cluster.xml file. Here is the correct cluster.xml file. Heartbeat will create two HA IP addresses, one on each NIC (i.e. eth0 and eth1).

 <?xml version="1.0" ?>
<cluster>
<clustering state="on" />
<nodename value="san01" />
<resource value="MailTo::[email protected]::ClusterFailover"/>
<resource value="IPaddr::192.168.0.6/24/eth0:0" />
<resource value="IPaddr::192.168.7.16/24/eth1:0" />
<resource value="drbddisk::">
<resource value="LVM::vg0drbd">
<resource value="Filesystem::/dev/drbd0::/cluster_metadata::ext3::defaults,noatime">
<resource value="MakeMounts"/>
</cluster>

By:

Hi, this is if for anyone who finds the below comments of interest.  First, thanks to all who created Openfiler, those who wrote this HOWTO, and also those who contributed ideas or questions here, and in the Openfiler forums.

After I had created a VMware ESX HA demonstration setup on an i7 CPU whitebox ESXi server which used a single Openfiler iSCSI SAN, I wanted to test out the above HOWTO on setting up an Openfiler cluster. My interest was in an iSCSI SAN cluster. I built this cluster as two Openfiler guests on the i7 ESXi host server. Because of this I am unable to accurately test performance. Soon I want to reproduce this with real hardware, not virtual. The real question I am trying to solve is how to have an effective Openfiler SAN HA cluster for VMware using only 1 Gb NICs (sadly 10 Gb NICs are currently too expensive for my budget).

Reading VMware documentation, it is my understanding that VMware does not use NIC teaming to make two 1 Gb NICs into a single 2 Gb bond (please tell me how, if I am wrong). It is also my understanding that VMware's best and recommended approach is to create two VMkernels with one NIC each, then on your SAN with two NICs create two paths/networks to the SAN's iSCSI targets. When you have two (or more) paths in vSphere, you can then select "Round Robin (VMware)" for the iSCSI targets. I have tested this and it worked well, giving good, reliable data transfer, but not faster network speed. Why? Because each NIC/path is still only 1 Gbps, and they are used in turn, not together at the same time. When also used with two separate Gb switches, one for each path/network, it does allow for increased fault tolerance by having a redundant path.

This configuration was possible once I learned that I could specify two HA IP addresses using "IPaddr::192.168.0.6/24/eth0:0 IPaddr::192.168.7.16/24/eth1:0". Using the cluster built from this HOWTO, I used "ifconfig" to display the IP settings, which showed that the HA IP address was configured as eth0:0. That gave me the idea of using "192.168.0.6/24/eth0:0" in the Cluster.xml file, and I then created a second IP address entry for eth1 as "IPaddr::192.168.7.16/24/eth1:0".

The simplest way I could think of to explain how I configured this is to show a few of the important configuration files.

Two Openfiler servers called san01 and san02 (just to use a different name than filer0x). Each server has 4 NICs and one hard drive.
The hard disk drive partitioning that I used is slightly different to the above HOWTO:

  • 3072 MB root (“/”) partition
  • 3072 MB log (“/var/log”) partition
  • 2048 MB “swap” partition
  • 1024 MB “/meta” partition (used for DRBD0)
  • The remainder of the drive as an unmounted LVM (used for DRBD1)

[root@san01 ~]# fdisk -l /dev/sda

Disk /dev/sda: 107.3 GB, 107374182400 bytes
255 heads, 63 sectors/track, 13054 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sda1 * 1 392 3148708+ 83 Linux
/dev/sda2 393 784 3148740 83 Linux
/dev/sda3 785 1045 2096482+ 82 Linux swap / Solaris
/dev/sda4 1046 13054 96462292+ 5 Extended
/dev/sda5 1046 1176 1052226 83 Linux
/dev/sda6 1177 13054 95410003+ 8e Linux LVM
[root@san01 ~]#

The four NICs are configured as follows;
(bond0 is eth2 and eth3 bonded together)
High Availability IP address for eth0 is 192.168.0.6
High Availability IP address for eth1 is 192.168.7.16

 (text below is copied from ifconfig output);

san01
bond0     inet addr:192.168.5.1  Bcast:192.168.5.255  Mask:255.255.255.0
eth0      inet addr:192.168.0.1  Bcast:192.168.0.255  Mask:255.255.255.0
eth0:0    inet addr:192.168.0.6  Bcast:192.168.0.255  Mask:255.255.255.0
eth1      inet addr:192.168.7.11  Bcast:192.168.7.255  Mask:255.255.255.0
eth1:0    inet addr:192.168.7.16  Bcast:192.168.7.255  Mask:255.255.255.0
eth2      inet addr:192.168.5.1  Bcast:192.168.5.255  Mask:255.255.255.0
eth3      inet addr:192.168.5.1  Bcast:192.168.5.255  Mask:255.255.255.0

san02
bond0     inet addr:192.168.5.2  Bcast:192.168.5.255  Mask:255.255.255.0
eth0      inet addr:192.168.0.2  Bcast:192.168.0.255  Mask:255.255.255.0
eth0:0    inet addr:192.168.0.6  Bcast:192.168.0.255  Mask:255.255.255.0
eth1      inet addr:192.168.7.12  Bcast:192.168.7.255  Mask:255.255.255.0
eth1:0    inet addr:192.168.7.16  Bcast:192.168.7.255  Mask:255.255.255.0
eth2      inet addr:192.168.5.2  Bcast:192.168.5.255  Mask:255.255.255.0
eth3      inet addr:192.168.5.2  Bcast:192.168.5.255  Mask:255.255.255.0

The /cluster_metadata/opt/openfiler/etc/cluster.xml file is;
<?xml version="1.0" ?>
<cluster>
<clustering state="on" />
<nodename value="filer01" />
<resource value="MailTo::[email protected]::ClusterFailover"/>
<resource value="IPaddr::192.168.0.6/24/eth0" />
<resource value="IPaddr::192.168.0.16/24/eth4" />
<resource value="drbddisk::">
<resource value="LVM::vg0drbd">
<resource value="Filesystem::/dev/drbd0::/cluster_metadata::ext3::defaults,noatime">
<resource value="MakeMounts"/>
</cluster>

Which created /etc/ha.d/haresources
san01 MailTo::[email protected]::ClusterFailover IPaddr::192.168.0.6/24/eth0:0 IPaddr::192.168.7.16/24/eth1:0 drbddisk:: LVM::vg0drbd Filesystem::/dev/drbd0::/cluster_metadata::ext3::defaults,noatime MakeMounts iscsi-target openfiler

This is the drbd status

[root@san01 ~]# service drbd status ; date
drbd driver loaded OK; device status:
version: 8.2.7 (api:88/proto:86-88)
GIT-hash: 61b7f4c2fc34fe3d2acf7be6bcc1fc2684708a7d build by [email protected], 2008-11-12 16:47:11
m:res cs st ds p mounted fstype
0:cluster_metadata Connected Primary/Secondary UpToDate/UpToDate C /cluster_metadata ext3
1:vg0drbd Connected Primary/Secondary UpToDate/UpToDate C
Fri Feb 4 19:40:25 EST 2011
[root@san01 ~]#

 

I am curious if anyone has any further ideas or useful comments.

By: Anonymous

ESX will do nic teaming to make a single trunk.

1) get a smart switch (capable of link aggregation) and an ESX server with multiple nics.
2) in vSphere, go to the server configuration of the ESX box -> networking -> find your VMNetwork and click Properties.
3) Add the unassigned NIC's to the VM Network vswitch.
4) Select properties on the VSwitch -> setup your link aggregation there
5) (AFTER YOU HAVE DONE 4, NOT BEFORE) turn link aggregation on in your switch
6) ???
7) profit

By: Jon

If you're setting this up with more than one data partition/volume group, the LV created at the end of the tutorial must be made on each volume group or heartbeat will fail.  

 ex:

 root@filer01 ~# lvcreate -L 400M -n StartVol_0 vg0drbd

 root@filer01 ~# lvcreate -L 400M -n StartVol_1 vg1drbd

 root@filer01 ~# lvcreate -L 400M -n StartVol_2 vg2drbd

 root@filer01 ~# lvcreate -L 400M -n StartVol_3 vg3drbd

 [email protected] ~# lvcreate -L 400M -n StartVol_3 vg3drbd