How To Set Up Software RAID1 On A Running LVM System (Incl. GRUB Configuration) (Debian Lenny) - Page 3

6 Preparing GRUB

Afterwards we must install the GRUB bootloader on both hard drives, /dev/sda and /dev/sdb, so that the system can boot from either drive:

grub

In the GRUB shell, type the following commands:

root (hd0,0)
grub> root (hd0,0)
 Filesystem type is ext2fs, partition type 0x83

grub>
setup (hd0)
grub> setup (hd0)
 Checking if "/boot/grub/stage1" exists... no
 Checking if "/grub/stage1" exists... yes
 Checking if "/grub/stage2" exists... yes
 Checking if "/grub/e2fs_stage1_5" exists... yes
 Running "embed /grub/e2fs_stage1_5 (hd0)"...  17 sectors are embedded.
succeeded
 Running "install /grub/stage1 (hd0) (hd0)1+17 p (hd0,0)/grub/stage2 /grub/menu.lst"... succeeded
Done.

grub>
root (hd1,0)
grub> root (hd1,0)
 Filesystem type is ext2fs, partition type 0xfd

grub>
setup (hd1)
grub> setup (hd1)
 Checking if "/boot/grub/stage1" exists... no
 Checking if "/grub/stage1" exists... yes
 Checking if "/grub/stage2" exists... yes
 Checking if "/grub/e2fs_stage1_5" exists... yes
 Running "embed /grub/e2fs_stage1_5 (hd1)"...  17 sectors are embedded.
succeeded
 Running "install /grub/stage1 (hd1) (hd1)1+17 p (hd1,0)/grub/stage2 /grub/menu.lst"... succeeded
Done.

grub>
quit
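
If you do not want to type these commands interactively, the whole GRUB session can also be scripted. The following is just a sketch that feeds the same commands to the GRUB legacy shell in batch mode (assuming your device names match this setup):

grub --batch <<EOF
root (hd0,0)
setup (hd0)
root (hd1,0)
setup (hd1)
quit
EOF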

Now, back on the normal shell, we reboot the system and hope that it boots ok from our RAID arrays:

reboot

 

7 Preparing /dev/sda

If all goes well, you should now find /dev/md0 in the output of

df -h
server1:~# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/debian-root
                      4.2G  748M  3.2G  19% /
tmpfs                 126M     0  126M   0% /lib/init/rw
udev                   10M  108K  9.9M   2% /dev
tmpfs                 126M     0  126M   0% /dev/shm
/dev/md0              236M   32M  192M  15% /boot
server1:~#

The output of

cat /proc/mdstat

should be as follows:

server1:~# cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 sda5[0] sdb5[1]
      4988032 blocks [2/2] [UU]

md0 : active raid1 sdb1[1]
      248896 blocks [2/1] [_U]

unused devices: <none>
server1:~#
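
As you can see, /dev/md0 is still degraded ([_U]) because /dev/sda1 has not been added to it yet - we will do that in a moment. If you want more detail about the array and its missing member, you can run (the output will vary):

mdadm --detail /dev/md0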

The outputs of pvdisplay, vgdisplay, and lvdisplay should look like this (the UUIDs will differ on your system):

pvdisplay
server1:~# pvdisplay
  --- Physical volume ---
  PV Name               /dev/md1
  VG Name               debian
  PV Size               4.76 GB / not usable 3.12 MB
  Allocatable           yes (but full)
  PE Size (KByte)       4096
  Total PE              1217
  Free PE               0
  Allocated PE          1217
  PV UUID               rwRQ4h-Cxii-coUC-ibA0-2tV0-umae-3XC083

server1:~#
vgdisplay
server1:~# vgdisplay
  --- Volume group ---
  VG Name               debian
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  9
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               2
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               4.75 GB
  PE Size               4.00 MB
  Total PE              1217
  Alloc PE / Size       1217 / 4.75 GB
  Free  PE / Size       0 / 0
  VG UUID               4UfyCV-s32P-uZ5R-asRH-9Jjg-pkF6-d5wi32

server1:~#
lvdisplay
server1:~# lvdisplay
  --- Logical volume ---
  LV Name                /dev/debian/root
  VG Name                debian
  LV UUID                N58aS0-n1uV-32gb-S51m-kP75-sfA5-38SMVo
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                4.19 GB
  Current LE             1072
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:0

  --- Logical volume ---
  LV Name                /dev/debian/swap_1
  VG Name                debian
  LV UUID                IGWTnc-Zgmr-pKW8-Jcp6-URYF-2j2D-Ile6kQ
  LV Write Access        read/write
  LV Status              available
  # open                 2
  LV Size                580.00 MB
  Current LE             145
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:1

server1:~#
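
If you just want a quick confirmation that the volume group now lives on the RAID device /dev/md1, a short summary is enough (pvs comes with the same lvm2 package; this is only a convenience check):

pvs -o pv_name,vg_name,pv_size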

Now we must change the partition type of /dev/sda1 to Linux raid autodetect as well:

fdisk /dev/sda

server1:~# fdisk /dev/sda

Command (m for help): <-- t
Partition number (1-5): <-- 1
Hex code (type L to list codes): <-- fd
Changed system type of partition 1 to fd (Linux raid autodetect)

Command (m for help): <-- w
The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table.
The new table will be used at the next reboot.
Syncing disks.
server1:~#
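
The same partition type change can also be made non-interactively. This is just a sketch, assuming the sfdisk version shipped with Lenny (it sets the ID of partition 1 on /dev/sda to fd):

sfdisk --change-id /dev/sda 1 fd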

Now we can add /dev/sda1 to the /dev/md0 RAID array:

mdadm --add /dev/md0 /dev/sda1

Now take a look at

cat /proc/mdstat
server1:~# cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 sda5[0] sdb5[1]
      4988032 blocks [2/2] [UU]

md0 : active raid1 sda1[0] sdb1[1]
      248896 blocks [2/2] [UU]

unused devices: <none>
server1:~#
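
If the output still shows a recovery progress bar instead of [UU] for md0, the new member is still being synchronised - simply wait until the resync has finished. You can follow the progress with

watch cat /proc/mdstat

(press CTRL+C to leave watch).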

Then adjust /etc/mdadm/mdadm.conf to the new situation:

cp /etc/mdadm/mdadm.conf_orig /etc/mdadm/mdadm.conf
mdadm --examine --scan >> /etc/mdadm/mdadm.conf

/etc/mdadm/mdadm.conf should now look something like this:

cat /etc/mdadm/mdadm.conf
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
DEVICE partitions

# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR root

# definitions of existing MD arrays

# This file was auto-generated on Wed, 19 Aug 2009 16:13:09 +0200
# by mkconf $Id$
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=00d834dc:29cbe6c1:325ecf68:79913751
ARRAY /dev/md1 level=raid1 num-devices=2 UUID=8ce23a63:d17dd58b:325ecf68:79913751
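
The UUIDs in your file will of course differ. If you want to double-check that they match the running arrays, you can compare them with the UUID lines printed by (just a sanity check):

mdadm --detail /dev/md0 | grep UUID
mdadm --detail /dev/md1 | grep UUID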

Reboot the system:

reboot

It should boot without problems.
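
After the reboot you can verify once more that both arrays are active and clean (both should show [UU]):

cat /proc/mdstat
df -h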

That's it - you've successfully set up software RAID1 on your running LVM system!
