#1  
Old 27th April 2010, 05:51
dexjul dexjul is offline
Member
 
Join Date: May 2007
Posts: 53
Thanks: 0
Thanked 3 Times in 2 Posts
Software RAID + LVM + DRBD

Hi All,

I'm confused about how to set up DRBD.

I've configured my disks with software RAID and LVM.

Here's my disk layout; the other server is partitioned the same way:

######################

[root@postfix1 /]# df -h
Filesystem                       Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00  9.7G  1.1G  8.2G  12% /
/dev/md0                         289M   23M  252M   9% /boot
/dev/mapper/VolGroup00-LogVol02  9.7G  151M  9.1G   2% /opt
/dev/mapper/VolGroup00-LogVol01  9.7G  224M  9.0G   3% /var
tmpfs                            125M     0  125M   0% /dev/shm

######################


I installed DRBD and want to replicate the data to the other server.

Here's the drbd.conf on both servers:


######################

resource repdata {
  protocol C;
  on postfix1.server1.com {
    device    /dev/drbd0;
    disk      /dev/mapper/VolGroup00-LogVol00;
    address   192.168.88.188:7789;
    meta-disk internal;
  }
  on postfix2.server2.com {
    device    /dev/drbd0;
    disk      /dev/mapper/VolGroup00-LogVol00;
    address   192.168.1.189:7789;
    meta-disk internal;
  }
}

#######################


When I started DRBD, I got an error:


[root@postfix1 /]# service drbd start
Starting DRBD resources: [ d(repdata) /dev/drbd0: Failure: (114) Lower device is already claimed. This usually means it is mounted.

[repdata] cmd /sbin/drbdsetup /dev/drbd0 disk /dev/mapper/VolGroup00-LogVol00 /dev/mapper/VolGroup00-LogVol00 internal --set-defaults --create-device failed - continuing!

]..........
***************************************************************
DRBD's startup script waits for the peer node(s) to appear.
- In case this node was already a degraded cluster before the
reboot the timeout is 0 seconds. [degr-wfc-timeout]
- If the peer was available before the reboot the timeout will
expire after 0 seconds. [wfc-timeout]
(These values are for resource 'repdata'; 0 sec -> wait forever)
To abort waiting enter 'yes' [ 345]:

######################

[root@postfix1 /]# drbdadm create-md repdata
md_offset 10737414144
al_offset 10737381376
bm_offset 10737053696

Found ext3 filesystem which uses 10485760 kB
current configuration leaves usable 10485404 kB

Device size would be truncated, which
would corrupt data and result in
'access beyond end of device' errors.
You need to either
* use external meta data (recommended)
* shrink that filesystem first
* zero out the device (destroy the filesystem)
Operation refused.

Command 'drbdmeta /dev/drbd0 v08 /dev/mapper/VolGroup00-LogVol00 internal create-md' terminated with exit code 40

#####################
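
The refusal from create-md above is DRBD protecting the existing ext3 filesystem: with internal metadata, drbdmeta reserves space (roughly 128 MB) at the end of the lower device, and the filesystem currently extends into that area. The "external meta data" option the tool recommends can be sketched in drbd.conf like this; the small metadata LV (drbdmeta here) is hypothetical and would have to exist on both nodes. Note that this alone still won't help while the lower device is mounted as /:

######################

on postfix1.server1.com {
  device    /dev/drbd0;
  disk      /dev/mapper/VolGroup00-LogVol00;
  address   192.168.88.188:7789;
  # small dedicated LV holding DRBD metadata, index 0 (hypothetical name)
  meta-disk /dev/VolGroup00/drbdmeta[0];
}

######################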


Any idea what I'm missing?


Dexter
  #2  
Old 27th April 2010, 14:48
Mark_NL Mark_NL is offline
Senior Member
 
Join Date: Sep 2008
Location: The Netherlands
Posts: 912
Thanks: 12
Thanked 100 Times in 96 Posts

You already have /dev/mapper/VolGroup00-LogVol00 mounted as root (/).

That's not possible: DRBD can't claim a device that's mounted. You would have to unmount /dev/mapper/VolGroup00-LogVol00 and mount /dev/drbd0 as / instead.

If you really want to use DRBD for your root device, make sure the DRBD driver is in your initrd.img so you can boot from it. Still, I think it's a strange setup. Say your server crashes and the other takes over: now you can't (re)boot the first server anymore, because you can't have two primary nodes (it's possible, but you'll break everything).
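
For reference, the usual pattern is to give DRBD a dedicated, unmounted LV and mount the DRBD device on top of it. A rough first-time bring-up might look like this; the LV name and size are only examples, and it assumes free space in VolGroup00:

######################

# on BOTH nodes: carve out a dedicated LV for DRBD
lvcreate -L 5G -n LogVol05 VolGroup00

# on BOTH nodes: write DRBD metadata and start the resource
drbdadm create-md repdata
service drbd start

# on ONE node only: promote it, create the filesystem, mount it
drbdadm -- --overwrite-data-of-peer primary repdata
mkfs.ext3 /dev/drbd0
mount /dev/drbd0 /servermirror

######################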
  #3  
Old 5th May 2010, 08:21
dexjul dexjul is offline
Member
 

Hi Mark,

I've now successfully installed DRBD and Heartbeat.

DRBD is working on both nodes, but it doesn't automatically fail over to the secondary node.

I get this error:

[root@postfix1 /]# mount /dev/drbd1 /servermirror/
mount: block device /dev/drbd1 is write-protected, mounting read-only
mount: Wrong medium type

Here's the drbd.conf:

resource servermirror {
  protocol C;
  on postfix1.server1.com {
    device    /dev/drbd1;
    disk      /dev/VolGroup00/LogVol05;
    address   192.168.88.188:7789;
    meta-disk internal;
  }
  on postfix2.server2.com {
    device    /dev/drbd1;
    disk      /dev/VolGroup00/LogVol05;
    address   192.168.88.189:7789;
    meta-disk internal;
  }
}

==================


haresources:

postfix1.server1.com IPaddr::192.168.88.205 drbddisk::servermirror
postfix1.server1.com Filesystem::/dev/drbd1::/servermirror::ext3
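
One thing worth double-checking, offered only as a guess from the snippet above: in Heartbeat v1, all resources that should move together are normally listed on a single haresources line for one node (long lines can be continued with a trailing backslash). Split across two entries, they may not be grouped for failover. A single-line version would be:

######################

postfix1.server1.com IPaddr::192.168.88.205 drbddisk::servermirror Filesystem::/dev/drbd1::/servermirror::ext3

######################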



Thanks

Dexter
  #4  
Old 6th May 2010, 09:32
Mark_NL Mark_NL is offline
Senior Member
 
 

Quote:
I got this error [root@postfix1 /]# mount /dev/drbd1 /servermirror/
mount: block device /dev/drbd1 is write-protected, mounting read-only
mount: Wrong medium type
This most likely means your DRBD device is not Primary on that node.

Check its state with:

cat /proc/drbd
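
If that shows the resource as Secondary, it can be promoted by hand. The output below is only the typical shape for DRBD 8.x, not captured from this system:

######################

# cat /proc/drbd
 1: cs:Connected ro:Secondary/Secondary ds:UpToDate/UpToDate C r---

# promote this node, then retry the mount
drbdadm primary servermirror
mount /dev/drbd1 /servermirror

######################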