Openfiler 2.99 Active/Passive With Corosync, Pacemaker And DRBD

Openfiler is a Linux-based NAS/SAN appliance that can deliver storage over NFS, SMB/CIFS, iSCSI, and FTP, and it comes with a web interface through which you can control these services. This howto is based on the latest version of Openfiler at the time of writing; you can download it from the official homepage, www.openfiler.com.

Thanks to the Openfiler team for making this howto possible.

 

1. Create two systems with the following setup:

  • hostname: filer01
  • eth0: 10.10.11.101
  • eth1: 10.10.50.101
  • 500MB Meta partition
  • 4GB+ Data partition

 

  • hostname: filer02
  • eth0: 10.10.11.102
  • eth1: 10.10.50.102
  • 500MB Meta partition
  • 4GB+ Data partition

Virtual IP: 10.10.11.105 (do not configure this on any adapter; we will create it later with Corosync)
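
If you want to verify the layout before continuing, a quick check of the addresses and of the second disk is enough (this assumes the meta and data partitions were created on a second disk, /dev/sdb, as used in the DRBD configuration later in this howto):

root@filer01 ~# ip addr show eth0
root@filer01 ~# ip addr show eth1
root@filer01 ~# fdisk -l /dev/sdb

Repeat the same check on filer02.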

 

1.1 Create hosts file for easier access

root@filer01 ~# nano /etc/hosts

On filer01 add:

10.10.50.102	filer02

root@filer02 ~# nano /etc/hosts

On filer02 add:

10.10.50.101	filer01
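
A quick ping in both directions confirms that the names resolve over the 10.10.50.x replication network:

root@filer01 ~# ping -c 2 filer02
root@filer02 ~# ping -c 2 filer01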

 

1.2 Create/Exchange SSH Keys for easier file exchange

root@filer01 ~# ssh-keygen -t dsa

Generating public/private dsa key pair.
Enter file in which to save the key (/root/.ssh/id_dsa):
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_dsa.
Your public key has been saved in /root/.ssh/id_dsa.pub.
The key fingerprint is:

Do the same on filer02.

root@filer02 ~# ssh-keygen -t dsa

Then exchange the files:

root@filer01 ~# scp ~/.ssh/id_dsa.pub root@filer02:~/.ssh/authorized_keys

root@filer02 ~# scp ~/.ssh/id_dsa.pub root@filer01:~/.ssh/authorized_keys

And now you can exchange files between the nodes without entering a password.
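
To confirm that the key exchange worked, a login in each direction should now complete without a password prompt, for example:

root@filer01 ~# ssh filer02 hostname
root@filer02 ~# ssh filer01 hostname

Each command should simply print the remote hostname without asking for anything.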

 

2. Create meta/data Partition on both filers

Before we can actually start the cluster, we have to prepare both systems and let the data and meta partitions synchronize before they can be used by Corosync/Pacemaker, because the first cluster configuration will start DRBD and take over control of this service. So this time we prepare our partitions before we do the actual cluster configuration, just as we did with Openfiler 2.3.

 

2.1 Create DRBD Setup

Edit /etc/drbd.conf on filer01 and filer02:

# You can find an example in  /usr/share/doc/drbd.../drbd.conf.example
include "drbd.d/global_common.conf";
#include "drbd.d/*.res";
resource meta {
 on filer01 {
  device /dev/drbd0;
  disk /dev/sdb1;
  address 10.10.50.101:7788;
  meta-disk internal;
 }
 on filer02 {
  device /dev/drbd0;
  disk /dev/sdb1;
  address 10.10.50.102:7788;
  meta-disk internal;
 }
}
resource data {
 on filer01 {
  device /dev/drbd1;
  disk /dev/sdb2;
  address 10.10.50.101:7789;
  meta-disk internal;
 }
 on filer02 {
  device /dev/drbd1;
  disk /dev/sdb2;
  address 10.10.50.102:7789;
  meta-disk internal;
 }
}

After that, create the metadata on both resources. If you get errors at this step, wipe the beginning of the relevant backing partition with dd so that any old filesystem signature is removed, and if there is anything in /etc/fstab related to the /meta partition, remove those lines. (This happens when you create the meta partition during the installation phase.)

dd if=/dev/zero of=/dev/sdb1 bs=1M count=1

(Use /dev/sdb2 instead if the error concerns the data resource.)

root@filer01 ~# drbdadm create-md meta
root@filer01 ~# drbdadm create-md data

root@filer02 ~# drbdadm create-md meta
root@filer02 ~# drbdadm create-md data

Now you can start up drbd with:

service drbd start

on both nodes.

Make one node primary:

root@filer01 ~# drbdsetup /dev/drbd0 primary -o
root@filer01 ~# drbdsetup /dev/drbd1 primary -o
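
DRBD now performs the initial full synchronization of both resources from filer01 to filer02, which can take a while depending on the partition sizes. You can watch the progress with either of the following:

root@filer01 ~# cat /proc/drbd
root@filer01 ~# service drbd status

Once the sync has finished, both resources should show Connected and UpToDate/UpToDate, with filer01 as Primary.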

 

2.2 Prepare the Configuration Partition

root@filer01 ~# mkfs.ext3 /dev/drbd0

root@filer01 ~# service openfiler stop

 

2.2.1 Openfiler to meta-Partition

root@filer01 ~# mkdir /meta
root@filer01 ~# mount /dev/drbd0 /meta
root@filer01 ~# mv /opt/openfiler/ /opt/openfiler.local
root@filer01 ~# mkdir /meta/opt
root@filer01 ~# cp -a /opt/openfiler.local /meta/opt/openfiler
root@filer01 ~# ln -s /meta/opt/openfiler /opt/openfiler
root@filer01 ~# rm /meta/opt/openfiler/sbin/openfiler
root@filer01 ~# ln -s /usr/sbin/httpd /meta/opt/openfiler/sbin/openfiler
root@filer01 ~# rm /meta/opt/openfiler/etc/rsync.xml
root@filer01 ~# ln -s /opt/openfiler.local/etc/rsync.xml /meta/opt/openfiler/etc/
root@filer01 ~# mkdir -p /meta/etc/httpd/conf.d

 

2.2.2 Samba/NFS/iSCSI/ProFTPD Configuration Files to the Meta Partition

root@filer01 ~# service nfslock stop
root@filer01 ~# umount -a -t rpc-pipefs
root@filer01 ~# mkdir /meta/etc
root@filer01 ~# mv /etc/samba/ /meta/etc/
root@filer01 ~# ln -s /meta/etc/samba/ /etc/samba
root@filer01 ~# mkdir -p /meta/var/spool
root@filer01 ~# mv /var/spool/samba/ /meta/var/spool/
root@filer01 ~# ln -s /meta/var/spool/samba/ /var/spool/samba
root@filer01 ~# mkdir -p /meta/var/lib
root@filer01 ~# mv /var/lib/nfs/ /meta/var/lib/
root@filer01 ~# ln -s /meta/var/lib/nfs/ /var/lib/nfs
root@filer01 ~# mv /etc/exports /meta/etc/
root@filer01 ~# ln -s /meta/etc/exports /etc/exports
root@filer01 ~# mv /etc/ietd.conf /meta/etc/
root@filer01 ~# ln -s /meta/etc/ietd.conf /etc/ietd.conf
root@filer01 ~# mv /etc/initiators.allow /meta/etc/
root@filer01 ~# ln -s /meta/etc/initiators.allow /etc/initiators.allow
root@filer01 ~# mv /etc/initiators.deny /meta/etc/
root@filer01 ~# ln -s /meta/etc/initiators.deny /etc/initiators.deny
root@filer01 ~# mv /etc/proftpd /meta/etc/
root@filer01 ~# ln -s /meta/etc/proftpd/ /etc/proftpd
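
Before moving on it can be worth double-checking that the moved files are now reachable through their symlinks, for example:

root@filer01 ~# ls -ld /etc/samba /etc/exports /etc/ietd.conf /etc/proftpd
root@filer01 ~# ls -ld /var/lib/nfs /var/spool/samba

Each entry should show up as a symlink pointing into /meta.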

 

2.2.3 httpd Modules for Openfiler

root@filer01 ~# rm /opt/openfiler/etc/httpd/modules
root@filer01 ~# ln -s /usr/lib64/httpd/modules /opt/openfiler/etc/httpd/modules

Now start Openfiler and check that it runs:

root@filer01 ~# service openfiler start
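
If Openfiler starts cleanly, its web interface should be reachable again. As a quick, optional check from the node itself (assuming curl is installed and the GUI is listening on Openfiler's default port 446):

root@filer01 ~# curl -k -I https://10.10.11.101:446/

Alternatively, just open https://10.10.11.101:446 in a browser and log in.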

 

2.2.4 filer02 Openfiler Configuration

root@filer02 ~# service openfiler stop
root@filer02 ~# mkdir /meta
root@filer02 ~# mv /opt/openfiler/ /opt/openfiler.local
root@filer02 ~# ln -s /meta/opt/openfiler /opt/openfiler

 

2.2.5 Samba/NFS/ISCSI/ProFTPD Configuration Files to Meta Partition

root@filer02 ~# service nfslock stop
root@filer02 ~# umount -a -t rpc-pipefs
root@filer02 ~# rm -rf /etc/samba/
root@filer02 ~# ln -s /meta/etc/samba/ /etc/samba
root@filer02 ~# rm -rf /var/spool/samba/
root@filer02 ~# ln -s /meta/var/spool/samba/ /var/spool/samba
root@filer02 ~# rm -rf /var/lib/nfs/
root@filer02 ~# ln -s /meta/var/lib/nfs/ /var/lib/nfs
root@filer02 ~# rm -rf /etc/exports
root@filer02 ~# ln -s /meta/etc/exports /etc/exports
root@filer02 ~# rm /etc/ietd.conf
root@filer02 ~# ln -s /meta/etc/ietd.conf /etc/ietd.conf
root@filer02 ~# rm /etc/initiators.allow
root@filer02 ~# ln -s /meta/etc/initiators.allow /etc/initiators.allow
root@filer02 ~# rm /etc/initiators.deny
root@filer02 ~# ln -s /meta/etc/initiators.deny /etc/initiators.deny
root@filer02 ~# rm -rf /etc/proftpd
root@filer02 ~# ln -s /meta/etc/proftpd/ /etc/proftpd
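
Note that on filer02 these symlinks will be dangling for now, because /meta is only mounted on the active node. That is expected; the links become valid as soon as the cluster mounts /dev/drbd0 on this node during a failover. If you want to see what was created, a quick look is enough:

root@filer02 ~# ls -ld /opt/openfiler /etc/samba /etc/exports /etc/proftpd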

 

2.3 Prepare the Data Partition

Change the lvm filter in the

/etc/lvm/lvm.conf

file from:

filter = [ "a/.*/" ]

to

filter = [ "a|drbd[0-9]|", "r|.*|" ]

Copy this file to the other filer node:

root@filer01 ~# scp /etc/lvm/lvm.conf root@filer02:/etc/lvm/lvm.conf

After that we can create the physical volume, the volume group, and an initial logical volume:

root@filer01 ~# pvcreate /dev/drbd1
root@filer01 ~# vgcreate data /dev/drbd1
root@filer01 ~# lvcreate -L 400M -n filer data
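
To make sure LVM now only sees the DRBD device (and not the underlying /dev/sdb2) and that the volume group was created as expected, the standard LVM reporting tools can be used on the node where DRBD is primary, for example:

root@filer01 ~# pvs
root@filer01 ~# vgs
root@filer01 ~# lvs data

pvs should list /dev/drbd1 as the only physical volume of the data volume group.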

 

3. Start Corosync and create a configuration for it:

3.1 Create Corosync authkey

root@filer01~# corosync-keygen

(Press keys on the machine's physical keyboard to generate entropy; keystrokes in an SSH terminal will not help.)

Copy the authkey file to the other node and change its file permissions:

root@filer01~# scp /etc/corosync/authkey root@filer02:/etc/corosync/authkey
root@filer02~# chmod 400 /etc/corosync/authkey
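
The authkey has to be identical on both nodes and should only be readable by root. If you want to double-check before going on, comparing checksums and permissions is enough:

root@filer01~# md5sum /etc/corosync/authkey
root@filer02~# md5sum /etc/corosync/authkey
root@filer02~# ls -l /etc/corosync/authkey

The checksums should match, and the file should show mode 400 and be owned by root.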

 

3.2 Create a file named pcmk in /etc/corosync/service.d/

root@filer01~# vi /etc/corosync/service.d/pcmk

service {
        # Load the Pacemaker Cluster Resource Manager
        name: pacemaker
        ver:  0
 }

 

3.2.1 Copy this file to the other node

root@filer01~# scp /etc/corosync/service.d/pcmk root@filer02:/etc/corosync/service.d/pcmk

 

3.3 Create the corosync.conf file and adjust bindnetaddr to match your LAN network

root@filer01~# vi /etc/corosync/corosync.conf

# Please read the corosync.conf.5 manual page
compatibility: whitetank
totem {
        version: 2
        secauth: off
        threads: 0
        interface {
                ringnumber: 0
                bindnetaddr: 10.10.50.0
                mcastaddr: 226.94.1.1
                mcastport: 5405
                ttl: 1
        }
}
logging {
        fileline: off
        to_stderr: no
        to_logfile: yes
        to_syslog: yes
        logfile: /var/log/cluster/corosync.log
        debug: off
        timestamp: on
        logger_subsys {
                subsys: AMF
                debug: off
        }
}
amf {
        mode: disabled
}

 

3.3.1 Copy the file to the other node

root@filer01~# scp /etc/corosync/corosync.conf root@filer02:/etc/corosync/corosync.conf
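
Before bringing up the cluster stack it is worth confirming that both nodes really have identical Corosync files and that bindnetaddr (10.10.50.0 here) matches the network of the replication interface. A quick, optional check:

root@filer01~# md5sum /etc/corosync/corosync.conf /etc/corosync/service.d/pcmk
root@filer02~# md5sum /etc/corosync/corosync.conf /etc/corosync/service.d/pcmk
root@filer01~# ip addr show eth1

The checksums should be identical on both nodes, and eth1 should carry the 10.10.50.x address.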


Comments

From: at: 2011-05-07 16:15:01

This here is a good reason to change to FreeNAS. I did several years ago.
It's way easier to install and use.

Sam

From: at: 2011-05-09 09:52:36

Because it's so easy to set up FreeNAS as a highly available solution? I doubt it... but why don't you write a howto about it? I had a hard time using FreeNAS, after all.

From: Anonymous at: 2012-03-03 01:44:01

I really am having a hard time considering FreeNAS as a SAN solution for HA iSCSI deployment.  Sorry, but I think you must be referring to some home media server or something.  This is all about grown up stuff - not toys.

From: Anonymous at: 2011-06-28 06:59:57

I got the following:

[root@filer02 ~]# crm configure show
node filer01 \
        attributes standby="on"
node filer02 \
        attributes standby="on"

 

How do I change them to attributes standby="off"?

From: Saiful at: 2011-12-21 11:19:41

You may edit those items:

 # crm configure

 crm(live)configure# edit filer01

From: Eugeny at: 2012-01-16 12:42:36

The command:

lvcreate -L 400M -n filer data

Why "-L 400M"? If this is the data volume, why not use all 4GB?

From: Anonymous at: 2012-05-30 17:37:14

This is just an initial volume to get things going; it doesn't matter what size you make it. You can create volumes of any size later through the web interface once everything is completely set up.

From: Anonymous at: 2012-03-02 19:12:24

Can someone please tell me how to set up a 3rd offsite node in this scenario?

From: Sander at: 2012-08-22 07:48:09

Thank you for this info! It really helped me. I had still an issue with the failover part caused by a small bug in the iscsi package of openfiler. This thread helped me solving it: https://forums.openfiler.com/index.php?/topic/5739-lvm-cluster-resource-migration-issue/

From: Anonymous at: 2013-08-09 20:33:09

Wouldn't it make more sense to use drbdlinks instead of moving things into the meta directory manually?

From: at: 2011-06-24 10:49:51

nfs-lock should be nfslock

From: Anonymous at: 2013-02-28 02:05:05

I am going to start building this in my lab for HA storage for Hyper-V hosts. What IP do the clients use to connect to the iSCSI target?

From: Anonymous at: 2011-06-27 07:52:48

I fail to get "meta" back after taking the primary node down and then restarting it.

 [root@filer01 ~]# service drbd status

Every 2.0s: service drbd status                                                        Wed Jun 22 12:29:37 2011

drbd driver loaded OK; device status:
version: 8.3.10 (api:88/proto:86-96)
GIT-hash: 5c0b0469666682443d4785d90a2c603378f9017b build by phil@fat-tyre, 2011-01-28 12:17:35
m:res   cs            ro                 ds                 p       mounted  fstype
0:meta  Unconfigured
1:data  StandAlone    Secondary/Unknown  UpToDate/DUnknown  r-----



[root@filer01 ~]# service drbd restart

Restarting all DRBD resources: 0: Failure: (104) Can not open backing device.
Command '/sbin/drbdsetup 0 disk /dev/sda3 /dev/sda3 internal --set-defaults --create-device --on-io-error=detach' terminated with exit code 10


[root@filer01 ~]# fdisk -l


Disk /dev/sda: 8589 MB, 8589934592 bytes
255 heads, 63 sectors/track, 1044 cylinders, total 16777216 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000ba6a0

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *          63      257039      128488+  83  Linux
/dev/sda2          257040     4450004     2096482+  82  Linux swap / Solaris
/dev/sda3         4450005     5494229      522112+  83  Linux
/dev/sda4         5494230    16771859     5638815   83  Linux

Disk /dev/sdb: 107.4 GB, 107374182400 bytes
255 heads, 63 sectors/track, 13054 cylinders, total 209715200 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000e074f

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1              63   209712509   104856223+  8e  Linux LVM

[root@filer01 ~]# crm_mon --one-shot -V
crm_mon[6797]: 2011/06/22_12:34:07 ERROR: native_add_running: Resource ocf::Filesystem:MetaFS appears to be active on 2 nodes.
crm_mon[6797]: 2011/06/22_12:34:07 WARN: See http://clusterlabs.org/wiki/FAQ#Resource_is_Too_Active for more information.
============
Last updated: Wed Jun 22 12:34:07 2011
Stack: openais
Current DC: filer02 - partition with quorum
Version: 1.1.2-c6b59218ee949eebff30e837ff6f3824ed0ab86b
2 Nodes configured, 2 expected votes
2 Resources configured.
============

Online: [ filer01 filer02 ]

 Resource Group: g_services
     MetaFS     (ocf::heartbeat:Filesystem) Started [   filer02 filer01 ]
     lvmdata    (ocf::heartbeat:LVM):   Started filer02 (unmanaged) FAILED
     openfiler  (lsb:openfiler):        Stopped
     ClusterIP  (ocf::heartbeat:IPaddr2):       Stopped
     iscsi      (lsb:iscsi-target):     Stopped
     samba      (lsb:smb):      Stopped
     nfs        (lsb:nfs):      Stopped
 Master/Slave Set: ms_g_drbd
     Masters: [ filer02 ]
     Stopped: [ g_drbd:0 ]

Failed actions:
    lvmdata_stop_0 (node=filer02, call=46, rc=1, status=complete): unknown error
    drbd_meta:0_start_0 (node=filer01, call=12, rc=-2, status=Timed Out): unknown exec error
 

From: at: 2011-07-20 21:12:04

Did you perhaps forget to create the /meta directory on one of the filers?

From: at: 2012-09-14 20:13:54

I'm from Argentina.

What you have to do is unmount /meta (in my case /dev/sda3), run the command to create the metadata, and restart DRBD.

Commands:

# umount /dev/sda3

# drbdadm create-md meta

# service drbd restart

With that it should work.

 

From: Anonymous at: 2011-06-28 00:16:58

Here are some warning messages I received; does anyone have the same issue?


WARNING: MetaFS: default timeout 20s for start is smaller than the advised 60
WARNING: MetaFS: default timeout 20s for stop is smaller than the advised 60
WARNING: lvmdata: default timeout 20s for start is smaller than the advised 30
WARNING: lvmdata: default timeout 20s for stop is smaller than the advised 30
WARNING: drbd_meta: default timeout 20s for start is smaller than the advised 240
WARNING: drbd_meta: default timeout 20s for stop is smaller than the advised 100
WARNING: drbd_data: default timeout 20s for start is smaller than the advised 240
WARNING: drbd_data: default timeout 20s for stop is smaller than the advised 100

root@filer01 ~# crm configure verify

From: Anonymous at: 2012-04-25 18:20:05

I have the same problem. I'm not sure where we set the default timeout of 20 seconds though.

From: at: 2011-07-21 22:46:57

I think in following line is a typo:

crm(live)configure# order o_g_servicesafter_g_drbd inf: ms_g_drbd:promote g_services:start

should be:

crm(live)configure# order o_g_services_after_g_drbd inf: ms_g_drbd:promote g_services:start

(note the underscore between services_after)

From: hale at: 2011-08-14 15:31:22

Great tutorial. I am in the process of configuring it for a production server.

Have you used the DRBD management console? It looks very easy to use, but I have no idea how to set up replication.

From: at: 2011-08-27 13:35:42

Great tutorial!

I just finished building this in a test environment and I can see it working. A couple of questions:

On the main Openfiler server, can you still use the web interface, and can Openfiler be used on the second server?

From: Elias Chatzigeorgiou at: 2012-01-05 01:46:33

Great tutorial, thanks! A few questions below:

------------------------------------------------------------------------------------
a) How do I know, which node of the OF cluster is currently active?
For example, I use openfiler to provide iSCSI targets to clients.
How can I check if a node is currently in use by the iSCSI-target daemon?

I can try to deactivate a volume group using:

[root@openfiler1 ~]# vgchange -an data
  Can't deactivate volume group "data" with 3 open logical volume(s)		

In which case, if I get a message like the above then I know that
openfiler1 is the active node, but is there a better (non-intrusive)
way to check?

A better option seems to be 'pvs -v'. If the node is active then it shows the volume names:
[root@openfiler1 ~]# pvs -v
    Scanning for physical volume names
  PV         VG      Fmt  Attr PSize   PFree DevSize PV UUID
  /dev/drbd1 data    lvm2 a-   109.99g    0  110.00g c40m9K-tNk8-vTVz-tKix-UGyu-gYXa-gnKYoJ
  /dev/drbd2 tempdb  lvm2 a-    58.00g    0   58.00g 4CTq7I-yxAy-TZbY-TFxa-3alW-f97X-UDlGNP
  /dev/drbd3 distrib lvm2 a-    99.99g    0  100.00g l0DqWG-dR7s-XD2M-3Oek-bAft-d981-UuLReC

where on the inactive node it gives errors:
[root@openfiler2 ~]# pvs -v
    Scanning for physical volume names
  /dev/drbd0: open failed: Wrong medium type
  /dev/drbd1: open failed: Wrong medium type

Any further ideas/comments/suggestions?

------------------------------------------------------------------------------------

b) how can I gracefully failover to the other node ? Up to now, the only way I
know is forcing the active node to reboot (by entering two subsequent 'reboot'
commands). This however breaks the DRBD synchronization, and I need to
use a fix-split-brain procedure to bring back the DRBD in sync.

On the other hand, if I try to stop the corosync service on the active node,
the command takes forever! I understand that the suggested procedure should be
to disconnect all clients from the active node and then stop services,
is it a better approach to shut down the public network interface before
stopping the corosync service (in order to forcibly close client connections)?

Thanks

From: Anonymous at: 2012-01-18 10:13:13

It should be vgchange -a n data to deactivate a volume.

You can use the following to quickly switch:
crm node standby ; stops the services on FILER01 to test failover to FILER02
crm node online ; to bring the services back online on FILER01

From: Jera at: 2012-05-15 10:05:43

Hello 

The tutorial is great and I tried to follow it step by step, but I didn't get it to work.

Running crm_mon I get the following output:

Attempting connection to the cluster...
============
Last updated: Tue May 15 10:18:32 2012
Stack: openais
Current DC: cluster1 - partition with quorum
Version: 1.1.2-c6b59218ee949eebff30e837ff6f3824ed0ab86b
2 Nodes configured, 2 expected votes
4 Resources configured.
============

Online: [ cluster2 cluster1 ]

 Resource Group: g_services
     lvmdata    (ocf::heartbeat:LVM):   Started cluster2
     openfiler  (lsb:openfiler):        Stopped
     ClusterIP  (ocf::heartbeat:IPaddr2):       Stopped
     iscsi      (lsb:iscsi-target):     Stopped
     samba      (lsb:smb):      Stopped
     nfs        (lsb:nfs):      Stopped
     nfslock    (lsb:nfslock):  Stopped
 Master/Slave Set: ms_g_drbd
     Masters: [ cluster2 ]
     Slaves: [ cluster1 ]

Failed actions:
    nfs-lock_start_0 (node=cluster1, call=16, rc=1, status=complete): unknown e

 

 I misspelled the corosync command:

 order o_g_servicesafter_g_drbd inf: ms_g_drbd:promote g_services:start

 I've read in one of the comments that it was supposed to be:

 order o_g_services_after_g_drbd inf: ms_g_drbd:promote g_services:start

how do I correct it?

here is the output of the crm configure show command:

node cluster1
node cluster2
primitive ClusterIP ocf:heartbeat:IPaddr2 \
        params ip="128.1.8.101" cidr_netmask="32" \
        op monitor interval="30s"
primitive MetaFS ocf:heartbeat:Filesystem \
        params device="/dev/drbd0" directory="/meta" fstype="ext3"
primitive drbd_data ocf:linbit:drbd \
        params drbd_resource="data" \
        op monitor interval="15s"
primitive drbd_meta ocf:linbit:drbd \
        params drbd_resource="meta" \
        op monitor interval="15s"
primitive iscsi lsb:iscsi-target
primitive lvmdata ocf:heartbeat:LVM \
        params volgrpname="data"
primitive nfs lsb:nfs
primitive nfs-lock lsb:nfslock
primitive nfslock lsb:nfslock
primitive openfiler lsb:openfiler
primitive samba lsb:smb
group g_drbd drbd_meta drbd_data
group g_services lvmdata openfiler ClusterIP iscsi samba nfs nfslock
ms ms_g_drbd g_drbd \
        meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1" notify="true"
colocation c_g_services_on_g_drbd inf: g_services ms_g_drbd:Master
order o_g_servicesafter_g_drbd inf: ms_g_drbd:promote g_services:start
property $id="cib-bootstrap-options" \
        dc-version="1.1.2-c6b59218ee949eebff30e837ff6f3824ed0ab86b" \
        cluster-infrastructure="openais" \
        expected-quorum-votes="2" \
        stonith-enabled="false" \
        no-quorum-policy="ignore"
rsc_defaults $id="rsc-options" \
        resource-stickiness="100"

 

Also notice that the first two lines of the crm configure show output are different from the expected output. I get:

node cluster1
node cluster2

instead of:

node filer01 \
        attributes standby="off"
node filer02 \
        attributes standby="off"

Also I can't access the openfiler web interface.  

Thank you in advance

 Jera

From: bpatel at: 2014-01-03 17:11:05

I'm also having the same issue. Were you able to resolve it?

From: at: 2012-09-25 20:33:35

Hello,

When I am trying to configure corosync, I get to the following step in the guide and then run into an error:

crm(live)configure# group g_services MetaFS lvmdata openfiler ClusterIP iscsi samba nfs nfslock
ERROR: object lvmdata does not exist

Up until this point, I have had absolutely no problems with anything. DRBD is running great, but I can not put these systems in production without the clustering/HA.

The main difference between my setup and this guide is that I have two separate data stores: one will be an iSCSI target ("vm_store"), the other will be an NFS share ("nfs_data"). So I've simply run every command relating to "data" twice - once for each of my data stores (with my DRBD resource names, etc., in place of the guide's, of course). I don't think that should have any effect that would produce this error.

I am running Openfiler 2.99.2 with all the latest packages (`conary updateall'). Please let me know if there is any more information I should provide, and thank you in advance for your help!