softwareraid + LVM + drbd

Discussion in 'Installation/Configuration' started by dexjul, Apr 27, 2010.

  1. dexjul

    dexjul New Member

    Hi All,

    I am confused about how to set up drbd.

    I configured my disks with software RAID and LVM.

    Here's my disk layout; the other server is partitioned the same way:

    ######################

    [root@postfix1 /]# df -h
    Filesystem Size Used Avail Use% Mounted on
    /dev/mapper/VolGroup00-LogVol00
    9.7G 1.1G 8.2G 12% /
    /dev/md0 289M 23M 252M 9% /boot
    /dev/mapper/VolGroup00-LogVol02
    9.7G 151M 9.1G 2% /opt
    /dev/mapper/VolGroup00-LogVol01
    9.7G 224M 9.0G 3% /var
    tmpfs 125M 0 125M 0% /dev/shm

    ######################


    I installed drbd and I want to replicate the data on the other server.

    Here's the drbd.conf on both servers:


    ######################

    resource repdata {
      protocol C;
      on postfix1.server1.com {
        device /dev/drbd0;
        disk /dev/mapper/VolGroup00-LogVol00;
        address 192.168.88.188:7789;
        meta-disk internal;
      }
      on postfix2.server2.com {
        device /dev/drbd0;
        disk /dev/mapper/VolGroup00-LogVol00;
        address 192.168.1.189:7789;
        meta-disk internal;
      }
    }

    #######################


    When I started drbd I got an error.


    [root@postfix1 /]# service drbd start
    Starting DRBD resources: [ d(repdata) /dev/drbd0: Failure: (114) Lower device is already claimed. This usually means it is mounted.

    [repdata] cmd /sbin/drbdsetup /dev/drbd0 disk /dev/mapper/VolGroup00-LogVol00 /dev/mapper/VolGroup00-LogVol00 internal --set-defaults --create-device failed - continuing!

    ]..........
    ***************************************************************
    DRBD's startup script waits for the peer node(s) to appear.
    - In case this node was already a degraded cluster before the
    reboot the timeout is 0 seconds. [degr-wfc-timeout]
    - If the peer was available before the reboot the timeout will
    expire after 0 seconds. [wfc-timeout]
    (These values are for resource 'repdata'; 0 sec -> wait forever)
    To abort waiting enter 'yes' [ 345]:

    ######################

    [root@postfix1 /]# drbdadm create-md repdata
    md_offset 10737414144
    al_offset 10737381376
    bm_offset 10737053696

    Found ext3 filesystem which uses 10485760 kB
    current configuration leaves usable 10485404 kB

    Device size would be truncated, which
    would corrupt data and result in
    'access beyond end of device' errors.
    You need to either
    * use external meta data (recommended)
    * shrink that filesystem first
    * zero out the device (destroy the filesystem)
    Operation refused.

    Command 'drbdmeta /dev/drbd0 v08 /dev/mapper/VolGroup00-LogVol00 internal create-md' terminated with exit code 40

    #####################
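    Of the three options drbdmeta suggests, external metadata is the least destructive. As a rough sketch, the resource would point meta-disk at a small spare device instead of "internal" (the /dev/VolGroup00/drbdmeta LV here is hypothetical; any small unused LV or partition on each node would do):

    resource repdata {
      protocol C;
      on postfix1.server1.com {
        device /dev/drbd0;
        disk /dev/mapper/VolGroup00-LogVol00;
        address 192.168.88.188:7789;
        meta-disk /dev/VolGroup00/drbdmeta[0];
      }
      on postfix2.server2.com {
        device /dev/drbd0;
        disk /dev/mapper/VolGroup00-LogVol00;
        address 192.168.1.189:7789;
        meta-disk /dev/VolGroup00/drbdmeta[0];
      }
    }

    Note this alone would not fix the "Lower device is already claimed" error above, since the LV is still mounted as /.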


    Any idea what I'm missing?


    Dexter
     
  2. Mark_NL

    Mark_NL New Member

    You already mounted /dev/mapper/VolGroup00-LogVol00 as root (/)

    that's not possible.. drbd can't claim a device that's in use.
    You would have to unmount /dev/mapper/VolGroup00-LogVol00 and mount /dev/drbd0 as / instead.

    Though if you use drbd for your root device, I think you should make sure you have it in your initrd.img so you can boot from it .. still, I think it's a strange setup. Say your server crashes and the other one takes over; now you're not able to (re)boot the first server anymore, because you can't have 2 primary nodes (it's possible, but you'll break everything)
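    A more common layout is to carve a dedicated LV out of the volume group for the replicated data and leave / alone. A sketch, assuming VolGroup00 still has free space (the LV name and size here are made up):

    lvcreate -L 5G -n LogVol05 VolGroup00

    Then point "disk" in drbd.conf at /dev/VolGroup00/LogVol05 instead of the root LV. Internal meta-disk is fine in that case, because you create the filesystem on /dev/drbd0 afterwards, not on the LV directly, so nothing is on the device yet when drbdadm create-md runs.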
     
  3. dexjul

    dexjul New Member

    Hi Mark,

    I have now successfully installed drbd and heartbeat.

    drbd is running on both nodes, but it does not automatically fail over to the secondary node.

    I got this error:

    [root@postfix1 /]# mount /dev/drbd1 /servermirror/
    mount: block device /dev/drbd1 is write-protected, mounting read-only
    mount: Wrong medium type

    here's the drbd.conf:

    resource servermirror {
      protocol C;
      on postfix1.server1.com {
        device /dev/drbd1;
        disk /dev/VolGroup00/LogVol05;
        address 192.168.88.188:7789;
        meta-disk internal;
      }
      on postfix2.server2.com {
        device /dev/drbd1;
        disk /dev/VolGroup00/LogVol05;
        address 192.168.88.189:7789;
        meta-disk internal;
      }
    }

    ==================


    haresources

    postfix1.server1.com IPaddr::192.168.88.205 drbddisk::servermirror
    postfix1.server1.com Filesystem::/dev/drbd1::/servermirror::ext3
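    As far as I know, heartbeat reads haresources one resource group per line, so splitting the IP and the filesystem across two lines makes them two independent groups. If they should fail over together, a single line like this (using the same names as above) may be what you want:

    postfix1.server1.com IPaddr::192.168.88.205 drbddisk::servermirror Filesystem::/dev/drbd1::/servermirror::ext3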



    Thanks

    Dexter
     
  4. Mark_NL

    Mark_NL New Member

    Most likely your drbd device has not been made primary on that node.

    do:
    Code:
    cat /proc/drbd
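    If that shows the node as Secondary (e.g. st:Secondary/Secondary), you have to promote it before mounting; roughly (servermirror and the mount point are taken from your config above):

    drbdadm primary servermirror
    mount /dev/drbd1 /servermirror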
     
